Dec 16 13:31:48.693378 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025 Dec 16 13:31:48.693393 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 13:31:48.693399 kernel: Disabled fast string operations Dec 16 13:31:48.693403 kernel: BIOS-provided physical RAM map: Dec 16 13:31:48.693407 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Dec 16 13:31:48.693411 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Dec 16 13:31:48.693415 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Dec 16 13:31:48.693438 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Dec 16 13:31:48.693442 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Dec 16 13:31:48.693446 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Dec 16 13:31:48.693450 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Dec 16 13:31:48.693453 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Dec 16 13:31:48.693457 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Dec 16 13:31:48.693461 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Dec 16 13:31:48.693467 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Dec 16 13:31:48.693472 kernel: NX (Execute Disable) protection: active Dec 16 13:31:48.693477 kernel: APIC: Static calls initialized Dec 16 13:31:48.693481 kernel: SMBIOS 2.7 present. Dec 16 13:31:48.693486 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Dec 16 13:31:48.693491 kernel: DMI: Memory slots populated: 1/128 Dec 16 13:31:48.693495 kernel: vmware: hypercall mode: 0x00 Dec 16 13:31:48.693499 kernel: Hypervisor detected: VMware Dec 16 13:31:48.693504 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Dec 16 13:31:48.693509 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Dec 16 13:31:48.693514 kernel: vmware: using clock offset of 3190465913 ns Dec 16 13:31:48.693518 kernel: tsc: Detected 3408.000 MHz processor Dec 16 13:31:48.693523 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 16 13:31:48.693528 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 16 13:31:48.693532 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Dec 16 13:31:48.693537 kernel: total RAM covered: 3072M Dec 16 13:31:48.693542 kernel: Found optimal setting for mtrr clean up Dec 16 13:31:48.693547 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Dec 16 13:31:48.693552 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Dec 16 13:31:48.693557 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 16 13:31:48.693562 kernel: Using GB pages for direct mapping Dec 16 13:31:48.693566 kernel: ACPI: Early table checksum verification disabled Dec 16 13:31:48.693571 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Dec 16 13:31:48.693575 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Dec 16 13:31:48.693580 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Dec 16 13:31:48.693584 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Dec 16 13:31:48.693591 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Dec 16 13:31:48.693596 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Dec 16 13:31:48.693601 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Dec 16 13:31:48.693606 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Dec 16 13:31:48.693611 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Dec 16 13:31:48.693615 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Dec 16 13:31:48.693621 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Dec 16 13:31:48.693626 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Dec 16 13:31:48.693631 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Dec 16 13:31:48.693636 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Dec 16 13:31:48.693641 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Dec 16 13:31:48.693646 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Dec 16 13:31:48.693650 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Dec 16 13:31:48.693655 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Dec 16 13:31:48.693660 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Dec 16 13:31:48.693665 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Dec 16 13:31:48.693670 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Dec 16 13:31:48.693675 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Dec 16 13:31:48.693680 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 16 13:31:48.693684 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Dec 16 13:31:48.693689 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Dec 16 13:31:48.693694 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff] Dec 16 13:31:48.693699 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff] Dec 16 13:31:48.693704 kernel: Zone ranges: Dec 16 13:31:48.693708 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 16 13:31:48.693714 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Dec 16 13:31:48.693719 kernel: Normal empty Dec 16 13:31:48.693723 kernel: Device empty Dec 16 13:31:48.693728 kernel: Movable zone start for each node Dec 16 13:31:48.693733 kernel: Early memory node ranges Dec 16 13:31:48.693738 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Dec 16 13:31:48.693742 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Dec 16 13:31:48.693747 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Dec 16 13:31:48.693752 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Dec 16 13:31:48.693756 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 16 13:31:48.693762 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Dec 16 13:31:48.693767 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Dec 16 13:31:48.693772 kernel: ACPI: PM-Timer IO Port: 0x1008 Dec 16 13:31:48.693776 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Dec 16 13:31:48.693781 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Dec 16 13:31:48.693786 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Dec 16 13:31:48.693790 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Dec 16 13:31:48.693795 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Dec 16 13:31:48.693800 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Dec 16 13:31:48.693805 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge 
lint[0x1]) Dec 16 13:31:48.693810 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Dec 16 13:31:48.693814 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Dec 16 13:31:48.693826 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Dec 16 13:31:48.693832 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Dec 16 13:31:48.693836 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Dec 16 13:31:48.693841 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Dec 16 13:31:48.693845 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Dec 16 13:31:48.693850 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Dec 16 13:31:48.693855 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Dec 16 13:31:48.693861 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Dec 16 13:31:48.693865 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Dec 16 13:31:48.693870 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Dec 16 13:31:48.693875 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Dec 16 13:31:48.693879 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Dec 16 13:31:48.693884 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Dec 16 13:31:48.693889 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Dec 16 13:31:48.693893 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Dec 16 13:31:48.693898 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Dec 16 13:31:48.693903 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Dec 16 13:31:48.693909 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Dec 16 13:31:48.693913 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Dec 16 13:31:48.693918 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Dec 16 13:31:48.693923 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Dec 16 13:31:48.693927 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Dec 16 13:31:48.693932 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Dec 16 13:31:48.693937 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Dec 16 13:31:48.693941 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Dec 16 13:31:48.693946 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Dec 16 13:31:48.693950 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Dec 16 13:31:48.693956 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Dec 16 13:31:48.693961 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Dec 16 13:31:48.693965 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Dec 16 13:31:48.693970 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Dec 16 13:31:48.693978 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Dec 16 13:31:48.693984 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Dec 16 13:31:48.693989 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Dec 16 13:31:48.693994 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Dec 16 13:31:48.694000 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Dec 16 13:31:48.694005 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Dec 16 13:31:48.694010 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Dec 16 13:31:48.694014 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Dec 16 13:31:48.694019 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Dec 16 13:31:48.694024 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x31] high edge lint[0x1]) Dec 16 13:31:48.694029 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Dec 16 13:31:48.694034 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Dec 16 13:31:48.694039 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Dec 16 13:31:48.694045 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Dec 16 13:31:48.694050 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Dec 16 13:31:48.694055 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Dec 16 13:31:48.694060 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Dec 16 13:31:48.694065 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Dec 16 13:31:48.694070 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Dec 16 13:31:48.694075 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Dec 16 13:31:48.694080 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Dec 16 13:31:48.694085 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Dec 16 13:31:48.694090 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Dec 16 13:31:48.694095 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Dec 16 13:31:48.694100 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Dec 16 13:31:48.694105 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Dec 16 13:31:48.694110 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Dec 16 13:31:48.694115 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Dec 16 13:31:48.694120 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Dec 16 13:31:48.694125 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Dec 16 13:31:48.694130 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Dec 16 13:31:48.694135 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Dec 16 13:31:48.694140 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Dec 16 13:31:48.694146 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Dec 16 13:31:48.694150 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Dec 16 13:31:48.694156 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Dec 16 13:31:48.694161 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Dec 16 13:31:48.694166 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Dec 16 13:31:48.694170 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Dec 16 13:31:48.694175 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Dec 16 13:31:48.694180 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Dec 16 13:31:48.694185 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Dec 16 13:31:48.694190 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Dec 16 13:31:48.694196 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Dec 16 13:31:48.694201 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Dec 16 13:31:48.694206 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Dec 16 13:31:48.694211 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Dec 16 13:31:48.694216 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Dec 16 13:31:48.694221 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Dec 16 13:31:48.694225 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Dec 16 13:31:48.694230 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Dec 16 13:31:48.694235 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Dec 16 13:31:48.694241 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Dec 16 13:31:48.694246 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Dec 16 13:31:48.694251 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Dec 16 13:31:48.694256 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Dec 16 13:31:48.694261 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Dec 16 13:31:48.694266 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Dec 16 13:31:48.694271 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Dec 16 13:31:48.694276 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Dec 16 13:31:48.694281 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Dec 16 13:31:48.694286 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Dec 16 13:31:48.694292 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Dec 16 13:31:48.694297 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Dec 16 13:31:48.694302 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Dec 16 13:31:48.694307 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Dec 16 13:31:48.694312 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Dec 16 13:31:48.694317 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Dec 16 13:31:48.694322 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Dec 16 13:31:48.694326 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Dec 16 13:31:48.694331 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Dec 16 13:31:48.694336 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Dec 16 13:31:48.694343 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Dec 16 13:31:48.694348 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Dec 16 13:31:48.694353 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Dec 16 13:31:48.694358 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Dec 16 13:31:48.694363 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Dec 16 13:31:48.694368 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Dec 16 13:31:48.694373 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Dec 16 13:31:48.694378 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Dec 16 13:31:48.694383 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Dec 16 13:31:48.694388 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Dec 16 13:31:48.694393 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Dec 16 13:31:48.694398 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Dec 16 13:31:48.694403 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Dec 16 13:31:48.694408 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Dec 16 13:31:48.694413 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Dec 16 13:31:48.694418 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Dec 16 13:31:48.694423 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Dec 16 13:31:48.694433 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Dec 16 13:31:48.694440 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 16 13:31:48.694447 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Dec 16 13:31:48.694452 kernel: TSC deadline timer available Dec 16 13:31:48.694457 kernel: CPU topo: Max. logical packages: 128 Dec 16 13:31:48.694462 kernel: CPU topo: Max. logical dies: 128 Dec 16 13:31:48.694467 kernel: CPU topo: Max. 
dies per package: 1 Dec 16 13:31:48.694477 kernel: CPU topo: Max. threads per core: 1 Dec 16 13:31:48.694483 kernel: CPU topo: Num. cores per package: 1 Dec 16 13:31:48.694488 kernel: CPU topo: Num. threads per package: 1 Dec 16 13:31:48.694493 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs Dec 16 13:31:48.694498 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Dec 16 13:31:48.694504 kernel: Booting paravirtualized kernel on VMware hypervisor Dec 16 13:31:48.694514 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 16 13:31:48.694520 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Dec 16 13:31:48.694525 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Dec 16 13:31:48.694530 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Dec 16 13:31:48.694535 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Dec 16 13:31:48.694545 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Dec 16 13:31:48.694551 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Dec 16 13:31:48.694558 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Dec 16 13:31:48.694563 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Dec 16 13:31:48.694568 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Dec 16 13:31:48.694576 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Dec 16 13:31:48.694582 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Dec 16 13:31:48.694587 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Dec 16 13:31:48.694591 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Dec 16 13:31:48.694596 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Dec 16 13:31:48.694605 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Dec 16 13:31:48.694614 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Dec 16 13:31:48.694619 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Dec 16 13:31:48.694624 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Dec 16 13:31:48.694629 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Dec 16 13:31:48.694639 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 13:31:48.694646 kernel: random: crng init done Dec 16 13:31:48.694651 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Dec 16 13:31:48.694656 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Dec 16 13:31:48.694668 kernel: printk: log_buf_len min size: 262144 bytes Dec 16 13:31:48.694674 kernel: printk: log_buf_len: 1048576 bytes Dec 16 13:31:48.694679 kernel: printk: early log buf free: 245704(93%) Dec 16 13:31:48.694684 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 13:31:48.694689 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 16 13:31:48.694694 kernel: Fallback order for Node 0: 0 Dec 16 13:31:48.694699 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 524157 Dec 16 13:31:48.694705 kernel: Policy zone: DMA32 Dec 16 13:31:48.694710 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 13:31:48.694715 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Dec 16 13:31:48.694722 kernel: ftrace: allocating 40103 entries in 157 pages Dec 16 13:31:48.694727 kernel: ftrace: allocated 157 pages with 5 groups Dec 16 13:31:48.694732 kernel: Dynamic Preempt: voluntary Dec 16 13:31:48.694737 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 13:31:48.694742 kernel: rcu: RCU event tracing is enabled. Dec 16 13:31:48.694748 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Dec 16 13:31:48.694753 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 13:31:48.694758 kernel: Rude variant of Tasks RCU enabled. Dec 16 13:31:48.694763 kernel: Tracing variant of Tasks RCU enabled. Dec 16 13:31:48.694769 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 13:31:48.694774 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Dec 16 13:31:48.694779 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Dec 16 13:31:48.694785 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Dec 16 13:31:48.694790 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Dec 16 13:31:48.694795 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Dec 16 13:31:48.694800 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Dec 16 13:31:48.694805 kernel: Console: colour VGA+ 80x25 Dec 16 13:31:48.694810 kernel: printk: legacy console [tty0] enabled Dec 16 13:31:48.695023 kernel: printk: legacy console [ttyS0] enabled Dec 16 13:31:48.695032 kernel: ACPI: Core revision 20240827 Dec 16 13:31:48.695038 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Dec 16 13:31:48.695043 kernel: APIC: Switch to symmetric I/O mode setup Dec 16 13:31:48.695048 kernel: x2apic enabled Dec 16 13:31:48.695053 kernel: APIC: Switched APIC routing to: physical x2apic Dec 16 13:31:48.695059 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 16 13:31:48.695064 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Dec 16 13:31:48.695069 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Dec 16 13:31:48.695076 kernel: Disabled fast string operations Dec 16 13:31:48.695081 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 16 13:31:48.695086 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Dec 16 13:31:48.695092 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 16 13:31:48.695097 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Dec 16 13:31:48.695102 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Dec 16 13:31:48.695107 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Dec 16 13:31:48.695112 kernel: RETBleed: Mitigation: Enhanced IBRS Dec 16 13:31:48.695137 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 16 13:31:48.695145 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 16 13:31:48.695150 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 16 13:31:48.695155 kernel: SRBDS: Unknown: Dependent on hypervisor status Dec 16 13:31:48.695160 kernel: GDS: Unknown: Dependent on hypervisor status Dec 16 13:31:48.695165 kernel: active return thunk: its_return_thunk Dec 16 13:31:48.695170 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 16 13:31:48.695175 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 16 13:31:48.695180 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 16 13:31:48.695186 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 16 13:31:48.695192 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 16 13:31:48.695197 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Dec 16 13:31:48.695202 kernel: Freeing SMP alternatives memory: 32K Dec 16 13:31:48.695210 kernel: pid_max: default: 131072 minimum: 1024 Dec 16 13:31:48.695216 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 13:31:48.695221 kernel: landlock: Up and running. Dec 16 13:31:48.695226 kernel: SELinux: Initializing. Dec 16 13:31:48.695231 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 16 13:31:48.695236 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 16 13:31:48.695242 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Dec 16 13:31:48.695247 kernel: Performance Events: Skylake events, core PMU driver. Dec 16 13:31:48.695252 kernel: core: CPUID marked event: 'cpu cycles' unavailable Dec 16 13:31:48.695258 kernel: core: CPUID marked event: 'instructions' unavailable Dec 16 13:31:48.695262 kernel: core: CPUID marked event: 'bus cycles' unavailable Dec 16 13:31:48.695267 kernel: core: CPUID marked event: 'cache references' unavailable Dec 16 13:31:48.695272 kernel: core: CPUID marked event: 'cache misses' unavailable Dec 16 13:31:48.695456 kernel: core: CPUID marked event: 'branch instructions' unavailable Dec 16 13:31:48.695467 kernel: core: CPUID marked event: 'branch misses' unavailable Dec 16 13:31:48.695486 kernel: ... version: 1 Dec 16 13:31:48.695491 kernel: ... bit width: 48 Dec 16 13:31:48.695496 kernel: ... generic registers: 4 Dec 16 13:31:48.695501 kernel: ... value mask: 0000ffffffffffff Dec 16 13:31:48.695506 kernel: ... max period: 000000007fffffff Dec 16 13:31:48.695511 kernel: ... 
fixed-purpose events: 0 Dec 16 13:31:48.695516 kernel: ... event mask: 000000000000000f Dec 16 13:31:48.695538 kernel: signal: max sigframe size: 1776 Dec 16 13:31:48.695558 kernel: rcu: Hierarchical SRCU implementation. Dec 16 13:31:48.695564 kernel: rcu: Max phase no-delay instances is 400. Dec 16 13:31:48.695570 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level Dec 16 13:31:48.695575 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 16 13:31:48.695580 kernel: smp: Bringing up secondary CPUs ... Dec 16 13:31:48.695585 kernel: smpboot: x86: Booting SMP configuration: Dec 16 13:31:48.695590 kernel: .... node #0, CPUs: #1 Dec 16 13:31:48.695595 kernel: Disabled fast string operations Dec 16 13:31:48.695600 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 13:31:48.695605 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Dec 16 13:31:48.695612 kernel: Memory: 1916044K/2096628K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 169200K reserved, 0K cma-reserved) Dec 16 13:31:48.695617 kernel: devtmpfs: initialized Dec 16 13:31:48.695622 kernel: x86/mm: Memory block size: 128MB Dec 16 13:31:48.695627 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Dec 16 13:31:48.695633 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 13:31:48.695638 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Dec 16 13:31:48.695643 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 13:31:48.695648 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 13:31:48.695653 kernel: audit: initializing netlink subsys (disabled) Dec 16 13:31:48.695659 kernel: audit: type=2000 audit(1765891906.264:1): state=initialized audit_enabled=0 res=1 Dec 16 13:31:48.695664 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 13:31:48.695670 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 16 13:31:48.695675 kernel: cpuidle: using governor menu Dec 16 13:31:48.695680 kernel: Simple Boot Flag at 0x36 set to 0x80 Dec 16 13:31:48.695685 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 13:31:48.695690 kernel: dca service started, version 1.12.1 Dec 16 13:31:48.695695 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f] Dec 16 13:31:48.695707 kernel: PCI: Using configuration type 1 for base access Dec 16 13:31:48.695714 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 16 13:31:48.695720 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 13:31:48.695725 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 13:31:48.695730 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 13:31:48.695762 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 13:31:48.695768 kernel: ACPI: Added _OSI(Module Device) Dec 16 13:31:48.696952 kernel: ACPI: Added _OSI(Processor Device) Dec 16 13:31:48.696959 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 13:31:48.696964 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 13:31:48.696972 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Dec 16 13:31:48.696977 kernel: ACPI: Interpreter enabled Dec 16 13:31:48.696983 kernel: ACPI: PM: (supports S0 S1 S5) Dec 16 13:31:48.696988 kernel: ACPI: Using IOAPIC for interrupt routing Dec 16 13:31:48.696994 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 16 13:31:48.696999 kernel: PCI: Using E820 reservations for host bridge windows Dec 16 13:31:48.697005 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Dec 16 13:31:48.697010 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Dec 16 13:31:48.697089 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 16 13:31:48.697144 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Dec 16 13:31:48.697192 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Dec 16 13:31:48.697201 kernel: PCI host bridge to bus 0000:00 Dec 16 13:31:48.697251 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 16 13:31:48.697295 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Dec 16 13:31:48.697337 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 16 13:31:48.697381 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 16 13:31:48.697423 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Dec 16 13:31:48.697464 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Dec 16 13:31:48.697521 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint Dec 16 13:31:48.697577 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge Dec 16 13:31:48.697627 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 16 13:31:48.697682 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Dec 16 13:31:48.697736 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint Dec 16 13:31:48.697784 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f] Dec 16 13:31:48.698851 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Dec 16 13:31:48.698977 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk Dec 16 13:31:48.699026 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Dec 16 13:31:48.699074 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk Dec 16 13:31:48.699126 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Dec 16 13:31:48.699175 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Dec 16 13:31:48.699222 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Dec 16 13:31:48.699274 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 
0x088000 conventional PCI endpoint Dec 16 13:31:48.699325 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf] Dec 16 13:31:48.699372 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit] Dec 16 13:31:48.699424 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint Dec 16 13:31:48.699472 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f] Dec 16 13:31:48.699534 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref] Dec 16 13:31:48.699592 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff] Dec 16 13:31:48.699653 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref] Dec 16 13:31:48.699709 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 16 13:31:48.699764 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge Dec 16 13:31:48.699828 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Dec 16 13:31:48.699886 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Dec 16 13:31:48.699935 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Dec 16 13:31:48.699982 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Dec 16 13:31:48.700038 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.700088 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Dec 16 13:31:48.700135 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Dec 16 13:31:48.700183 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Dec 16 13:31:48.700231 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.700284 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.700333 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Dec 16 13:31:48.700389 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Dec 16 13:31:48.700438 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Dec 16 13:31:48.700498 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Dec 16 13:31:48.700554 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.700609 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.700658 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Dec 16 13:31:48.700708 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Dec 16 13:31:48.700755 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Dec 16 13:31:48.700803 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Dec 16 13:31:48.703047 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.703122 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.703175 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Dec 16 13:31:48.703224 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Dec 16 13:31:48.703276 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Dec 16 13:31:48.703325 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.703379 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.703428 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Dec 16 13:31:48.703475 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Dec 16 13:31:48.703523 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Dec 16 13:31:48.703570 
kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.703626 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.703675 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Dec 16 13:31:48.703723 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Dec 16 13:31:48.703771 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Dec 16 13:31:48.703827 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.703882 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.703931 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Dec 16 13:31:48.703982 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Dec 16 13:31:48.704030 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Dec 16 13:31:48.704078 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.704129 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.704178 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Dec 16 13:31:48.704226 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Dec 16 13:31:48.704273 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Dec 16 13:31:48.704320 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.704376 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.704425 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Dec 16 13:31:48.704473 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Dec 16 13:31:48.704520 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Dec 16 13:31:48.704567 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.704619 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.704667 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Dec 16 13:31:48.704718 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Dec 16 13:31:48.704765 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Dec 16 13:31:48.704812 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Dec 16 13:31:48.707653 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.707709 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.707760 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Dec 16 13:31:48.707809 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Dec 16 13:31:48.707872 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Dec 16 13:31:48.707922 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Dec 16 13:31:48.707971 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.708025 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.708074 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Dec 16 13:31:48.708122 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Dec 16 13:31:48.708171 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Dec 16 13:31:48.708223 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.708277 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.708326 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Dec 16 13:31:48.708374 kernel: pci 
0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Dec 16 13:31:48.708421 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Dec 16 13:31:48.708468 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.708521 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.708572 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Dec 16 13:31:48.708619 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Dec 16 13:31:48.708666 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Dec 16 13:31:48.708714 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.708766 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.708815 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Dec 16 13:31:48.708925 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Dec 16 13:31:48.709006 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Dec 16 13:31:48.709054 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.709106 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.709154 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Dec 16 13:31:48.709203 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Dec 16 13:31:48.709250 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Dec 16 13:31:48.709299 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.709353 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.709404 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Dec 16 13:31:48.709452 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Dec 16 13:31:48.709499 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Dec 16 13:31:48.709547 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Dec 16 13:31:48.709595 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.709647 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.709699 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Dec 16 13:31:48.709752 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Dec 16 13:31:48.709811 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Dec 16 13:31:48.711923 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Dec 16 13:31:48.711980 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.712037 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.712088 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Dec 16 13:31:48.712138 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Dec 16 13:31:48.712188 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Dec 16 13:31:48.712236 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Dec 16 13:31:48.712285 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.712341 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.712392 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Dec 16 13:31:48.712440 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Dec 16 13:31:48.712492 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Dec 16 
13:31:48.712549 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.712601 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.712655 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Dec 16 13:31:48.712712 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Dec 16 13:31:48.712760 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Dec 16 13:31:48.712808 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.712873 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.712922 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Dec 16 13:31:48.712970 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Dec 16 13:31:48.713018 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Dec 16 13:31:48.713069 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.713122 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.713171 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Dec 16 13:31:48.713219 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Dec 16 13:31:48.713266 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Dec 16 13:31:48.713313 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.713364 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.713413 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Dec 16 13:31:48.713464 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Dec 16 13:31:48.713512 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Dec 16 13:31:48.713559 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.713612 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.713660 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Dec 16 13:31:48.713708 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Dec 16 13:31:48.713755 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Dec 16 13:31:48.713805 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Dec 16 13:31:48.714912 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.714971 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.715023 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Dec 16 13:31:48.715072 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Dec 16 13:31:48.715121 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Dec 16 13:31:48.715231 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Dec 16 13:31:48.715480 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.715541 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.715619 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Dec 16 13:31:48.715669 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Dec 16 13:31:48.715718 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Dec 16 13:31:48.715766 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.715826 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.715884 kernel: pci 0000:00:18.3: PCI bridge to [bus 
1e] Dec 16 13:31:48.715933 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Dec 16 13:31:48.715982 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Dec 16 13:31:48.716029 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.716083 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.716132 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Dec 16 13:31:48.716179 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Dec 16 13:31:48.716230 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Dec 16 13:31:48.716278 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.716330 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.716379 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Dec 16 13:31:48.716426 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Dec 16 13:31:48.716475 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Dec 16 13:31:48.716523 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.716581 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.716630 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Dec 16 13:31:48.716679 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Dec 16 13:31:48.716727 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Dec 16 13:31:48.716775 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.716834 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Dec 16 13:31:48.716892 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Dec 16 13:31:48.716944 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Dec 16 13:31:48.716993 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Dec 16 13:31:48.717042 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.717095 kernel: pci_bus 0000:01: extended config space not accessible Dec 16 13:31:48.717146 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 16 13:31:48.717196 kernel: pci_bus 0000:02: extended config space not accessible Dec 16 13:31:48.717205 kernel: acpiphp: Slot [32] registered Dec 16 13:31:48.717211 kernel: acpiphp: Slot [33] registered Dec 16 13:31:48.717219 kernel: acpiphp: Slot [34] registered Dec 16 13:31:48.717224 kernel: acpiphp: Slot [35] registered Dec 16 13:31:48.717230 kernel: acpiphp: Slot [36] registered Dec 16 13:31:48.717235 kernel: acpiphp: Slot [37] registered Dec 16 13:31:48.717241 kernel: acpiphp: Slot [38] registered Dec 16 13:31:48.717246 kernel: acpiphp: Slot [39] registered Dec 16 13:31:48.717252 kernel: acpiphp: Slot [40] registered Dec 16 13:31:48.717257 kernel: acpiphp: Slot [41] registered Dec 16 13:31:48.717263 kernel: acpiphp: Slot [42] registered Dec 16 13:31:48.717269 kernel: acpiphp: Slot [43] registered Dec 16 13:31:48.717275 kernel: acpiphp: Slot [44] registered Dec 16 13:31:48.717280 kernel: acpiphp: Slot [45] registered Dec 16 13:31:48.717286 kernel: acpiphp: Slot [46] registered Dec 16 13:31:48.717291 kernel: acpiphp: Slot [47] registered Dec 16 13:31:48.717296 kernel: acpiphp: Slot [48] registered Dec 16 13:31:48.717302 kernel: acpiphp: Slot [49] registered Dec 16 13:31:48.717307 kernel: acpiphp: Slot [50] registered Dec 16 13:31:48.717313 kernel: acpiphp: Slot [51] registered Dec 16 
13:31:48.717318 kernel: acpiphp: Slot [52] registered Dec 16 13:31:48.717324 kernel: acpiphp: Slot [53] registered Dec 16 13:31:48.717330 kernel: acpiphp: Slot [54] registered Dec 16 13:31:48.717335 kernel: acpiphp: Slot [55] registered Dec 16 13:31:48.717341 kernel: acpiphp: Slot [56] registered Dec 16 13:31:48.717347 kernel: acpiphp: Slot [57] registered Dec 16 13:31:48.717352 kernel: acpiphp: Slot [58] registered Dec 16 13:31:48.717357 kernel: acpiphp: Slot [59] registered Dec 16 13:31:48.717362 kernel: acpiphp: Slot [60] registered Dec 16 13:31:48.717368 kernel: acpiphp: Slot [61] registered Dec 16 13:31:48.717374 kernel: acpiphp: Slot [62] registered Dec 16 13:31:48.717380 kernel: acpiphp: Slot [63] registered Dec 16 13:31:48.717437 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Dec 16 13:31:48.717487 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Dec 16 13:31:48.717535 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Dec 16 13:31:48.717584 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Dec 16 13:31:48.717631 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Dec 16 13:31:48.717679 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Dec 16 13:31:48.717738 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Dec 16 13:31:48.717789 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Dec 16 13:31:48.718153 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Dec 16 13:31:48.718208 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Dec 16 13:31:48.718259 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Dec 16 13:31:48.718308 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Dec 16 13:31:48.718359 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Dec 16 13:31:48.718412 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Dec 16 13:31:48.718466 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Dec 16 13:31:48.718526 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Dec 16 13:31:48.718589 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Dec 16 13:31:48.718651 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Dec 16 13:31:48.718720 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Dec 16 13:31:48.718779 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Dec 16 13:31:48.718897 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Dec 16 13:31:48.719004 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Dec 16 13:31:48.719055 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Dec 16 13:31:48.719119 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Dec 16 13:31:48.719204 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] Dec 16 13:31:48.719256 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Dec 16 13:31:48.719305 kernel: pci 0000:0b:00.0: supports D1 D2 Dec 16 13:31:48.719355 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 16 13:31:48.719408 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Dec 16 13:31:48.719458 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Dec 16 13:31:48.719506 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Dec 16 13:31:48.719556 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Dec 16 13:31:48.719605 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Dec 16 13:31:48.719654 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Dec 16 13:31:48.719712 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Dec 16 13:31:48.719765 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Dec 16 13:31:48.719815 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Dec 16 13:31:48.719877 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Dec 16 13:31:48.719927 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Dec 16 13:31:48.719976 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Dec 16 13:31:48.720024 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Dec 16 13:31:48.720073 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Dec 16 13:31:48.720122 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Dec 16 13:31:48.720174 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Dec 16 13:31:48.720224 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Dec 16 13:31:48.720272 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Dec 16 13:31:48.720320 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Dec 16 13:31:48.720369 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Dec 16 13:31:48.720441 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Dec 16 13:31:48.720541 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Dec 16 13:31:48.720592 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Dec 16 13:31:48.720641 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Dec 16 13:31:48.720689 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Dec 16 13:31:48.720698 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Dec 16 13:31:48.720704 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Dec 16 13:31:48.720710 kernel: ACPI: PCI: Interrupt link LNKB disabled Dec 16 13:31:48.720715 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 16 13:31:48.720721 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Dec 16 13:31:48.720728 kernel: iommu: Default domain type: Translated Dec 16 13:31:48.720734 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 16 13:31:48.720740 kernel: PCI: Using ACPI for IRQ routing Dec 16 13:31:48.720746 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 16 13:31:48.720751 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Dec 16 13:31:48.720757 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Dec 16 13:31:48.720804 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Dec 16 13:31:48.720885 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Dec 16 13:31:48.722464 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 16 13:31:48.722478 kernel: vgaarb: loaded Dec 16 13:31:48.722485 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Dec 16 13:31:48.722490 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Dec 16 13:31:48.722496 kernel: clocksource: Switched to clocksource tsc-early Dec 16 13:31:48.722502 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 13:31:48.722507 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 13:31:48.722513 kernel: pnp: PnP ACPI init Dec 16 13:31:48.722567 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Dec 16 13:31:48.722617 kernel: system 
00:00: [io 0x1040-0x104f] has been reserved Dec 16 13:31:48.722661 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Dec 16 13:31:48.722711 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Dec 16 13:31:48.722758 kernel: pnp 00:06: [dma 2] Dec 16 13:31:48.722806 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Dec 16 13:31:48.722859 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Dec 16 13:31:48.722914 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Dec 16 13:31:48.722923 kernel: pnp: PnP ACPI: found 8 devices Dec 16 13:31:48.722929 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 16 13:31:48.722935 kernel: NET: Registered PF_INET protocol family Dec 16 13:31:48.722940 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 13:31:48.722946 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 16 13:31:48.722952 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 13:31:48.722957 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 13:31:48.722963 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 16 13:31:48.722970 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 16 13:31:48.722976 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 16 13:31:48.722982 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 16 13:31:48.722987 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 13:31:48.722992 kernel: NET: Registered PF_XDP protocol family Dec 16 13:31:48.723041 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 16 13:31:48.723092 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 16 13:31:48.723140 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 16 13:31:48.723192 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 16 13:31:48.723242 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 16 13:31:48.723296 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Dec 16 13:31:48.723345 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Dec 16 13:31:48.723394 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Dec 16 13:31:48.723443 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Dec 16 13:31:48.723490 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Dec 16 13:31:48.723539 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Dec 16 13:31:48.723591 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Dec 16 13:31:48.723640 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Dec 16 13:31:48.723688 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Dec 16 13:31:48.723736 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Dec 16 13:31:48.723783 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Dec 16 
13:31:48.723871 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Dec 16 13:31:48.723922 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Dec 16 13:31:48.723970 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Dec 16 13:31:48.724020 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Dec 16 13:31:48.724068 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Dec 16 13:31:48.724116 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Dec 16 13:31:48.724163 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Dec 16 13:31:48.724222 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Dec 16 13:31:48.724271 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Dec 16 13:31:48.724320 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.724393 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.724444 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.724491 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.724539 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.724586 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.724633 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.724682 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.724731 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.724781 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.724885 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.724958 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.725007 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.725055 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.725103 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.725150 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.725197 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.725248 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.725296 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.725344 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.725392 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.725440 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.725488 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.725536 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.725584 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.725635 kernel: pci 0000:00:17.5: 
bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.725683 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.725730 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.725778 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.725853 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.725908 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.725956 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.726004 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.726054 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.726102 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.726150 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.726197 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.726245 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.726293 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.726341 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.726388 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.726439 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.726486 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.726533 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.726581 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.726628 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.726675 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.726722 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.726769 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.727833 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.727904 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.727960 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.728012 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.728061 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.728110 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.728159 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.728208 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.728256 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.728304 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.728355 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.728436 kernel: pci 0000:00:17.4: bridge window [io size 
0x1000]: can't assign; no space Dec 16 13:31:48.728487 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.728535 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.728584 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.728632 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.728679 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.728727 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.728774 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.728850 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.728975 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.729024 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.729072 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.729119 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.729167 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.729217 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.729267 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.729315 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.729363 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.729411 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.729458 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.729505 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.729554 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.729601 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:31:48.729648 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Dec 16 13:31:48.729699 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 16 13:31:48.729747 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Dec 16 13:31:48.729795 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Dec 16 13:31:48.729866 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Dec 16 13:31:48.729917 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Dec 16 13:31:48.729968 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Dec 16 13:31:48.730017 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Dec 16 13:31:48.730065 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Dec 16 13:31:48.730113 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Dec 16 13:31:48.730165 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Dec 16 13:31:48.730215 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Dec 16 13:31:48.730263 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Dec 16 13:31:48.730311 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Dec 16 13:31:48.730360 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit 
pref] Dec 16 13:31:48.730408 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Dec 16 13:31:48.730456 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Dec 16 13:31:48.730503 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Dec 16 13:31:48.730550 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Dec 16 13:31:48.730601 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Dec 16 13:31:48.730649 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Dec 16 13:31:48.730696 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Dec 16 13:31:48.730743 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Dec 16 13:31:48.730792 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Dec 16 13:31:48.731930 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Dec 16 13:31:48.731988 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Dec 16 13:31:48.732040 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Dec 16 13:31:48.732093 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Dec 16 13:31:48.732142 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Dec 16 13:31:48.732192 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Dec 16 13:31:48.732240 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Dec 16 13:31:48.732289 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Dec 16 13:31:48.732337 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Dec 16 13:31:48.732385 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Dec 16 13:31:48.732439 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Dec 16 13:31:48.732488 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Dec 16 13:31:48.732536 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Dec 16 13:31:48.732584 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Dec 16 13:31:48.732632 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Dec 16 13:31:48.732680 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Dec 16 13:31:48.732729 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Dec 16 13:31:48.732777 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Dec 16 13:31:48.732832 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Dec 16 13:31:48.732889 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Dec 16 13:31:48.732937 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Dec 16 13:31:48.732985 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Dec 16 13:31:48.733032 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Dec 16 13:31:48.733081 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Dec 16 13:31:48.733128 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Dec 16 13:31:48.733176 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Dec 16 13:31:48.733224 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Dec 16 13:31:48.733275 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Dec 16 13:31:48.733322 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Dec 16 13:31:48.733370 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Dec 16 13:31:48.733418 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Dec 16 13:31:48.733466 kernel: pci 
0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Dec 16 13:31:48.733514 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Dec 16 13:31:48.733561 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Dec 16 13:31:48.733609 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Dec 16 13:31:48.733660 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Dec 16 13:31:48.733707 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Dec 16 13:31:48.733756 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Dec 16 13:31:48.733804 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Dec 16 13:31:48.733898 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Dec 16 13:31:48.733948 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Dec 16 13:31:48.733997 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Dec 16 13:31:48.734046 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Dec 16 13:31:48.734096 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Dec 16 13:31:48.734144 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Dec 16 13:31:48.734191 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Dec 16 13:31:48.734241 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Dec 16 13:31:48.734289 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Dec 16 13:31:48.734337 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Dec 16 13:31:48.734385 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Dec 16 13:31:48.734433 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Dec 16 13:31:48.734480 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Dec 16 13:31:48.734530 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Dec 16 13:31:48.734579 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Dec 16 13:31:48.734627 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Dec 16 13:31:48.734675 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Dec 16 13:31:48.734723 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Dec 16 13:31:48.734770 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Dec 16 13:31:48.734830 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Dec 16 13:31:48.734890 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Dec 16 13:31:48.734939 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Dec 16 13:31:48.734987 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Dec 16 13:31:48.735035 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Dec 16 13:31:48.735082 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Dec 16 13:31:48.735130 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Dec 16 13:31:48.735178 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Dec 16 13:31:48.735226 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Dec 16 13:31:48.735276 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Dec 16 13:31:48.735325 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Dec 16 13:31:48.735373 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Dec 16 13:31:48.735421 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Dec 16 13:31:48.735470 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] 
Dec 16 13:31:48.735517 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Dec 16 13:31:48.735565 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Dec 16 13:31:48.735612 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Dec 16 13:31:48.735659 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Dec 16 13:31:48.735710 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Dec 16 13:31:48.735758 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Dec 16 13:31:48.735806 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Dec 16 13:31:48.735875 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Dec 16 13:31:48.735924 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Dec 16 13:31:48.735972 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Dec 16 13:31:48.736020 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Dec 16 13:31:48.736071 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Dec 16 13:31:48.736118 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Dec 16 13:31:48.736167 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Dec 16 13:31:48.736214 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Dec 16 13:31:48.736262 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Dec 16 13:31:48.736311 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Dec 16 13:31:48.736359 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Dec 16 13:31:48.736407 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Dec 16 13:31:48.736456 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Dec 16 13:31:48.736499 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Dec 16 13:31:48.736541 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Dec 16 13:31:48.736583 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Dec 16 13:31:48.736624 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Dec 16 13:31:48.736671 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Dec 16 13:31:48.736717 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Dec 16 13:31:48.736761 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Dec 16 13:31:48.736804 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Dec 16 13:31:48.736866 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Dec 16 13:31:48.736946 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Dec 16 13:31:48.736990 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Dec 16 13:31:48.737033 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Dec 16 13:31:48.737081 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Dec 16 13:31:48.737129 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Dec 16 13:31:48.737173 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Dec 16 13:31:48.737220 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Dec 16 13:31:48.737264 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Dec 16 13:31:48.737308 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Dec 16 13:31:48.737358 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Dec 16 13:31:48.737402 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Dec 
16 13:31:48.737448 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Dec 16 13:31:48.737496 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Dec 16 13:31:48.737540 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Dec 16 13:31:48.737586 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Dec 16 13:31:48.737631 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Dec 16 13:31:48.737678 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Dec 16 13:31:48.737724 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Dec 16 13:31:48.737774 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Dec 16 13:31:48.737826 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Dec 16 13:31:48.737879 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Dec 16 13:31:48.737924 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Dec 16 13:31:48.737974 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Dec 16 13:31:48.738018 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Dec 16 13:31:48.738062 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Dec 16 13:31:48.738109 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Dec 16 13:31:48.738152 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Dec 16 13:31:48.738196 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Dec 16 13:31:48.738243 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Dec 16 13:31:48.738289 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Dec 16 13:31:48.738333 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Dec 16 13:31:48.738382 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Dec 16 13:31:48.738426 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Dec 16 13:31:48.738473 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Dec 16 13:31:48.738516 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Dec 16 13:31:48.738566 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Dec 16 13:31:48.738611 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Dec 16 13:31:48.738657 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Dec 16 13:31:48.738701 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Dec 16 13:31:48.738751 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Dec 16 13:31:48.738795 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Dec 16 13:31:48.739166 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Dec 16 13:31:48.739217 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Dec 16 13:31:48.739261 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Dec 16 13:31:48.739309 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Dec 16 13:31:48.739353 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Dec 16 13:31:48.739396 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Dec 16 13:31:48.739444 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Dec 16 13:31:48.739491 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Dec 16 13:31:48.739534 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Dec 16 13:31:48.739677 kernel: 
pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Dec 16 13:31:48.739725 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Dec 16 13:31:48.739773 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Dec 16 13:31:48.739823 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Dec 16 13:31:48.739878 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Dec 16 13:31:48.739926 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Dec 16 13:31:48.739974 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Dec 16 13:31:48.740019 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Dec 16 13:31:48.740066 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Dec 16 13:31:48.740112 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Dec 16 13:31:48.740161 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Dec 16 13:31:48.740209 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Dec 16 13:31:48.740253 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Dec 16 13:31:48.740326 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Dec 16 13:31:48.740373 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Dec 16 13:31:48.740417 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Dec 16 13:31:48.742355 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Dec 16 13:31:48.742416 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Dec 16 13:31:48.742472 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Dec 16 13:31:48.742518 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Dec 16 13:31:48.742569 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Dec 16 13:31:48.742614 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Dec 16 13:31:48.742662 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Dec 16 13:31:48.742706 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Dec 16 13:31:48.742770 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Dec 16 13:31:48.742816 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Dec 16 13:31:48.742881 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Dec 16 13:31:48.742927 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Dec 16 13:31:48.742981 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 16 13:31:48.742990 kernel: PCI: CLS 32 bytes, default 64 Dec 16 13:31:48.742998 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 16 13:31:48.743004 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Dec 16 13:31:48.743010 kernel: clocksource: Switched to clocksource tsc Dec 16 13:31:48.743015 kernel: Initialise system trusted keyrings Dec 16 13:31:48.743021 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 16 13:31:48.743026 kernel: Key type asymmetric registered Dec 16 13:31:48.743032 kernel: Asymmetric key parser 'x509' registered Dec 16 13:31:48.743037 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 13:31:48.743043 kernel: io scheduler mq-deadline registered Dec 16 13:31:48.743049 kernel: io scheduler kyber registered Dec 16 13:31:48.743055 kernel: io scheduler bfq 
registered Dec 16 13:31:48.743105 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Dec 16 13:31:48.743154 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.743204 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Dec 16 13:31:48.743254 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.743303 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Dec 16 13:31:48.743351 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.743402 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Dec 16 13:31:48.743452 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.743502 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Dec 16 13:31:48.743550 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.743599 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Dec 16 13:31:48.743647 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.743695 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Dec 16 13:31:48.743748 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.743797 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Dec 16 13:31:48.743876 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.743927 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Dec 16 13:31:48.743976 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.744025 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Dec 16 13:31:48.744073 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.744125 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Dec 16 13:31:48.744174 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.744485 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Dec 16 13:31:48.744541 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.744601 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Dec 16 13:31:48.744653 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.744702 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Dec 16 13:31:48.744751 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ 
Dec 16 13:31:48.744803 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Dec 16 13:31:48.744917 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.744970 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Dec 16 13:31:48.745019 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.745069 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Dec 16 13:31:48.745117 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.745166 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Dec 16 13:31:48.745234 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.745296 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Dec 16 13:31:48.745345 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.745394 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Dec 16 13:31:48.745443 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.745501 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Dec 16 13:31:48.745551 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.745604 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Dec 16 13:31:48.745653 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.745702 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Dec 16 13:31:48.745751 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.745801 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Dec 16 13:31:48.745864 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.745916 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Dec 16 13:31:48.745964 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.746017 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Dec 16 13:31:48.746066 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.746115 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Dec 16 13:31:48.746163 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.746213 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Dec 16 13:31:48.746261 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 
13:31:48.746310 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Dec 16 13:31:48.746361 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.746410 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Dec 16 13:31:48.746460 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.746510 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Dec 16 13:31:48.746559 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.746608 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Dec 16 13:31:48.746657 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Dec 16 13:31:48.746668 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 13:31:48.746674 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 13:31:48.746680 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 13:31:48.746686 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Dec 16 13:31:48.746693 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 16 13:31:48.746698 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 16 13:31:48.746750 kernel: rtc_cmos 00:01: registered as rtc0 Dec 16 13:31:48.746796 kernel: rtc_cmos 00:01: setting system clock to 2025-12-16T13:31:48 UTC (1765891908) Dec 16 13:31:48.746807 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 16 13:31:48.746864 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Dec 16 13:31:48.746891 kernel: intel_pstate: CPU model not supported Dec 16 13:31:48.746897 kernel: NET: Registered PF_INET6 protocol family Dec 16 13:31:48.746905 kernel: Segment Routing with IPv6 Dec 16 13:31:48.746912 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 13:31:48.746931 kernel: NET: Registered PF_PACKET protocol family Dec 16 13:31:48.746937 kernel: Key type dns_resolver registered Dec 16 13:31:48.746943 kernel: IPI shorthand broadcast: enabled Dec 16 13:31:48.746950 kernel: sched_clock: Marking stable (2456003177, 159525070)->(2631546170, -16017923) Dec 16 13:31:48.746956 kernel: registered taskstats version 1 Dec 16 13:31:48.746961 kernel: Loading compiled-in X.509 certificates Dec 16 13:31:48.746967 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d' Dec 16 13:31:48.746973 kernel: Demotion targets for Node 0: null Dec 16 13:31:48.746978 kernel: Key type .fscrypt registered Dec 16 13:31:48.746984 kernel: Key type fscrypt-provisioning registered Dec 16 13:31:48.746990 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 13:31:48.746997 kernel: ima: Allocated hash algorithm: sha1 Dec 16 13:31:48.747003 kernel: ima: No architecture policies found Dec 16 13:31:48.747009 kernel: clk: Disabling unused clocks Dec 16 13:31:48.747014 kernel: Warning: unable to open an initial console. 
Dec 16 13:31:48.747020 kernel: Freeing unused kernel image (initmem) memory: 46188K Dec 16 13:31:48.747026 kernel: Write protecting the kernel read-only data: 40960k Dec 16 13:31:48.747032 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Dec 16 13:31:48.747038 kernel: Run /init as init process Dec 16 13:31:48.747044 kernel: with arguments: Dec 16 13:31:48.747050 kernel: /init Dec 16 13:31:48.747056 kernel: with environment: Dec 16 13:31:48.747062 kernel: HOME=/ Dec 16 13:31:48.747067 kernel: TERM=linux Dec 16 13:31:48.747074 systemd[1]: Successfully made /usr/ read-only. Dec 16 13:31:48.747082 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:31:48.747088 systemd[1]: Detected virtualization vmware. Dec 16 13:31:48.747094 systemd[1]: Detected architecture x86-64. Dec 16 13:31:48.747101 systemd[1]: Running in initrd. Dec 16 13:31:48.747107 systemd[1]: No hostname configured, using default hostname. Dec 16 13:31:48.747113 systemd[1]: Hostname set to . Dec 16 13:31:48.747119 systemd[1]: Initializing machine ID from random generator. Dec 16 13:31:48.747125 systemd[1]: Queued start job for default target initrd.target. Dec 16 13:31:48.747131 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:31:48.747137 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:31:48.747143 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 13:31:48.747150 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:31:48.747156 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 13:31:48.747162 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 13:31:48.747169 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 16 13:31:48.747175 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 16 13:31:48.747181 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:31:48.747187 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:31:48.747195 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:31:48.747201 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:31:48.747207 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:31:48.747212 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:31:48.747218 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:31:48.747224 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:31:48.747230 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 13:31:48.747236 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 13:31:48.747242 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Dec 16 13:31:48.747250 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:31:48.747256 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:31:48.747262 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:31:48.747268 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 13:31:48.747274 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:31:48.747280 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 13:31:48.747286 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 13:31:48.747293 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 13:31:48.747300 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:31:48.747306 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:31:48.747313 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:31:48.747319 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 13:31:48.747325 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:31:48.747332 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 13:31:48.747339 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 13:31:48.747357 systemd-journald[226]: Collecting audit messages is disabled. Dec 16 13:31:48.747375 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:31:48.747381 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:31:48.747388 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:31:48.747394 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 13:31:48.747400 kernel: Bridge firewalling registered Dec 16 13:31:48.747406 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 13:31:48.747412 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:31:48.747418 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:31:48.747424 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:31:48.747431 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 13:31:48.747437 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 13:31:48.747443 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:31:48.747450 systemd-journald[226]: Journal started Dec 16 13:31:48.747464 systemd-journald[226]: Runtime Journal (/run/log/journal/d5a04f144c0b4c48a3e3c02dbb67d891) is 4.8M, max 38.5M, 33.7M free. Dec 16 13:31:48.696784 systemd-modules-load[227]: Inserted module 'overlay' Dec 16 13:31:48.719699 systemd-modules-load[227]: Inserted module 'br_netfilter' Dec 16 13:31:48.750829 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:31:48.753890 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Dec 16 13:31:48.757637 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 13:31:48.762697 systemd-tmpfiles[265]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 13:31:48.764362 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:31:48.765651 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:31:48.792578 systemd-resolved[288]: Positive Trust Anchors: Dec 16 13:31:48.792808 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:31:48.793025 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:31:48.795034 systemd-resolved[288]: Defaulting to hostname 'linux'. Dec 16 13:31:48.795669 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:31:48.795839 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:31:48.812829 kernel: SCSI subsystem initialized Dec 16 13:31:48.828832 kernel: Loading iSCSI transport class v2.0-870. Dec 16 13:31:48.836831 kernel: iscsi: registered transport (tcp) Dec 16 13:31:48.856860 kernel: iscsi: registered transport (qla4xxx) Dec 16 13:31:48.856897 kernel: QLogic iSCSI HBA Driver Dec 16 13:31:48.867450 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:31:48.877332 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:31:48.878396 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:31:48.899944 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 13:31:48.900991 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 13:31:48.936865 kernel: raid6: avx2x4 gen() 49592 MB/s Dec 16 13:31:48.953863 kernel: raid6: avx2x2 gen() 55387 MB/s Dec 16 13:31:48.970974 kernel: raid6: avx2x1 gen() 46719 MB/s Dec 16 13:31:48.970992 kernel: raid6: using algorithm avx2x2 gen() 55387 MB/s Dec 16 13:31:48.989039 kernel: raid6: .... xor() 33481 MB/s, rmw enabled Dec 16 13:31:48.989059 kernel: raid6: using avx2x2 recovery algorithm Dec 16 13:31:49.001829 kernel: xor: automatically using best checksumming function avx Dec 16 13:31:49.100848 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 13:31:49.104310 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:31:49.105222 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 16 13:31:49.126571 systemd-udevd[474]: Using default interface naming scheme 'v255'. Dec 16 13:31:49.129926 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:31:49.130589 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 13:31:49.143917 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation Dec 16 13:31:49.156800 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:31:49.157713 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:31:49.229443 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:31:49.231141 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 13:31:49.291836 kernel: VMware PVSCSI driver - version 1.0.7.0-k Dec 16 13:31:49.296225 kernel: vmw_pvscsi: using 64bit dma Dec 16 13:31:49.296248 kernel: vmw_pvscsi: max_id: 16 Dec 16 13:31:49.296256 kernel: vmw_pvscsi: setting ring_pages to 8 Dec 16 13:31:49.303834 kernel: vmw_pvscsi: enabling reqCallThreshold Dec 16 13:31:49.303850 kernel: vmw_pvscsi: driver-based request coalescing enabled Dec 16 13:31:49.303862 kernel: vmw_pvscsi: using MSI-X Dec 16 13:31:49.309836 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Dec 16 13:31:49.314837 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Dec 16 13:31:49.314931 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Dec 16 13:31:49.315831 kernel: libata version 3.00 loaded. Dec 16 13:31:49.325884 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Dec 16 13:31:49.330969 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Dec 16 13:31:49.332646 (udev-worker)[532]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Dec 16 13:31:49.334840 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Dec 16 13:31:49.336829 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 13:31:49.341139 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:31:49.341871 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:31:49.342461 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:31:49.343028 kernel: ata_piix 0000:00:07.1: version 2.13 Dec 16 13:31:49.346332 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Dec 16 13:31:49.346425 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Dec 16 13:31:49.344858 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 16 13:31:49.348031 kernel: scsi host1: ata_piix Dec 16 13:31:49.348135 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 16 13:31:49.348203 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Dec 16 13:31:49.349766 kernel: scsi host2: ata_piix Dec 16 13:31:49.349850 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 Dec 16 13:31:49.349861 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 Dec 16 13:31:49.349869 kernel: sd 0:0:0:0: [sda] Cache data unavailable Dec 16 13:31:49.352140 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Dec 16 13:31:49.358834 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Dec 16 13:31:49.363089 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 13:31:49.363107 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 16 13:31:49.368921 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:31:49.517844 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Dec 16 13:31:49.523866 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Dec 16 13:31:49.530832 kernel: AES CTR mode by8 optimization enabled Dec 16 13:31:49.551848 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Dec 16 13:31:49.552014 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 13:31:49.567832 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 16 13:31:49.578124 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Dec 16 13:31:49.583167 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Dec 16 13:31:49.588232 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Dec 16 13:31:49.592365 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Dec 16 13:31:49.592496 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Dec 16 13:31:49.593139 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 13:31:49.630841 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 13:31:49.642851 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 13:31:49.788389 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 13:31:49.788900 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 13:31:49.789030 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:31:49.789133 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:31:49.790049 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 13:31:49.798908 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 13:31:50.646873 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 13:31:50.647669 disk-uuid[629]: The operation has completed successfully. Dec 16 13:31:50.687254 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 13:31:50.687322 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 13:31:50.697700 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 13:31:50.708877 sh[660]: Success Dec 16 13:31:50.722945 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 16 13:31:50.722964 kernel: device-mapper: uevent: version 1.0.3 Dec 16 13:31:50.724127 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 13:31:50.730838 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Dec 16 13:31:50.757976 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 13:31:50.758531 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 13:31:50.767638 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 16 13:31:50.778737 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (672) Dec 16 13:31:50.778759 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8 Dec 16 13:31:50.778771 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:31:50.787354 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 16 13:31:50.787372 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 13:31:50.787380 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 13:31:50.789386 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 13:31:50.789628 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:31:50.790245 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Dec 16 13:31:50.791878 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 13:31:50.825882 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (695) Dec 16 13:31:50.827832 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:31:50.829831 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:31:50.836871 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 13:31:50.836890 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 13:31:50.841900 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:31:50.845340 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 13:31:50.846024 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 13:31:50.867918 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Dec 16 13:31:50.868645 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 13:31:50.927696 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:31:50.929039 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:31:50.959634 systemd-networkd[849]: lo: Link UP Dec 16 13:31:50.959873 systemd-networkd[849]: lo: Gained carrier Dec 16 13:31:50.960642 systemd-networkd[849]: Enumeration completed Dec 16 13:31:50.960906 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:31:50.961047 systemd[1]: Reached target network.target - Network. Dec 16 13:31:50.961704 systemd-networkd[849]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. 
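The verity-setup lines above show /dev/mapper/usr being assembled with a sha256 root hash before the read-only /usr filesystem is mounted from it. As a rough illustration, assuming the veritysetup CLI from cryptsetup is available in the booted system (it may not be shipped in this initrd), the state of that mapping could be queried like this:

# Hedged sketch: ask veritysetup about the dm-verity mapping named "usr".
# "veritysetup status <name>" prints the mapping state and its backing
# data/hash devices; this is illustrative, not taken from this boot.
import subprocess

def verity_status(mapping="usr"):
    proc = subprocess.run(["veritysetup", "status", mapping],
                          capture_output=True, text=True, check=False)
    return proc.returncode, proc.stdout

if __name__ == "__main__":
    rc, out = verity_status()
    print(out if rc == 0 else f"no active verity mapping 'usr' (rc={rc})")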
Dec 16 13:31:50.964791 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Dec 16 13:31:50.964938 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Dec 16 13:31:50.965468 systemd-networkd[849]: ens192: Link UP Dec 16 13:31:50.965472 systemd-networkd[849]: ens192: Gained carrier Dec 16 13:31:50.971992 ignition[714]: Ignition 2.22.0 Dec 16 13:31:50.972006 ignition[714]: Stage: fetch-offline Dec 16 13:31:50.972022 ignition[714]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:31:50.972027 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 16 13:31:50.972071 ignition[714]: parsed url from cmdline: "" Dec 16 13:31:50.972073 ignition[714]: no config URL provided Dec 16 13:31:50.972076 ignition[714]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:31:50.972079 ignition[714]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:31:50.972417 ignition[714]: config successfully fetched Dec 16 13:31:50.972434 ignition[714]: parsing config with SHA512: 4f2d1db513ed576456caf28c6dcc653dd59a15ca8ca3967bae4bd5f3e66c6b9d71aa7263595d9eeedb8eb1b0abca04e6099d4bbc821c1b2645274e99d0045ba1 Dec 16 13:31:50.975059 unknown[714]: fetched base config from "system" Dec 16 13:31:50.975357 ignition[714]: fetch-offline: fetch-offline passed Dec 16 13:31:50.975072 unknown[714]: fetched user config from "vmware" Dec 16 13:31:50.975396 ignition[714]: Ignition finished successfully Dec 16 13:31:50.976861 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:31:50.977210 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 16 13:31:50.977805 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 13:31:50.997882 ignition[858]: Ignition 2.22.0 Dec 16 13:31:50.997891 ignition[858]: Stage: kargs Dec 16 13:31:50.997969 ignition[858]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:31:50.997975 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 16 13:31:50.998577 ignition[858]: kargs: kargs passed Dec 16 13:31:50.998610 ignition[858]: Ignition finished successfully Dec 16 13:31:51.000093 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 13:31:51.000871 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 13:31:51.019345 ignition[864]: Ignition 2.22.0 Dec 16 13:31:51.019355 ignition[864]: Stage: disks Dec 16 13:31:51.019441 ignition[864]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:31:51.019448 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 16 13:31:51.020108 ignition[864]: disks: disks passed Dec 16 13:31:51.020140 ignition[864]: Ignition finished successfully Dec 16 13:31:51.020974 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 13:31:51.021357 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 13:31:51.021481 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 13:31:51.021671 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:31:51.021863 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:31:51.022041 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:31:51.022713 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
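The fetch-offline stage above logs the SHA-512 digest of the parsed config before handing off to the later stages. For illustration only, the same kind of digest can be computed with Python's hashlib; the /run/ignition.json path below is simply the location named in the ConditionPathExists check above and may not exist on a running system:

# Sketch: compute a SHA-512 digest of a config file, the same kind of value
# Ignition logs as "parsing config with SHA512: ...". Path is illustrative.
import hashlib

def sha512_of(path):
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    print(sha512_of("/run/ignition.json"))  # hypothetical location, for illustration only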
Dec 16 13:31:51.039593 systemd-fsck[873]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Dec 16 13:31:51.041239 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 13:31:51.042141 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 13:31:51.113172 kernel: EXT4-fs (sda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 16 13:31:51.112730 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 13:31:51.113039 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 13:31:51.114071 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:31:51.114856 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 13:31:51.116093 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 13:31:51.116290 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 13:31:51.116473 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:31:51.124878 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 13:31:51.125545 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 13:31:51.129830 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (881) Dec 16 13:31:51.132853 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:31:51.132874 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:31:51.136833 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 13:31:51.136852 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 13:31:51.138379 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 13:31:51.160103 initrd-setup-root[905]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 13:31:51.163213 initrd-setup-root[912]: cut: /sysroot/etc/group: No such file or directory Dec 16 13:31:51.165705 initrd-setup-root[919]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 13:31:51.167807 initrd-setup-root[926]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 13:31:51.228354 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 13:31:51.228998 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 13:31:51.229893 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 13:31:51.241839 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:31:51.253559 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 13:31:51.259799 ignition[994]: INFO : Ignition 2.22.0 Dec 16 13:31:51.260032 ignition[994]: INFO : Stage: mount Dec 16 13:31:51.260229 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:31:51.260368 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 16 13:31:51.261026 ignition[994]: INFO : mount: mount passed Dec 16 13:31:51.261167 ignition[994]: INFO : Ignition finished successfully Dec 16 13:31:51.261984 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 13:31:51.262580 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 13:31:51.777180 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Dec 16 13:31:51.778636 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:31:51.793832 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1006) Dec 16 13:31:51.796914 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:31:51.796934 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:31:51.800980 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 13:31:51.800999 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 13:31:51.802236 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 13:31:51.828318 ignition[1023]: INFO : Ignition 2.22.0 Dec 16 13:31:51.828318 ignition[1023]: INFO : Stage: files Dec 16 13:31:51.828669 ignition[1023]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:31:51.828669 ignition[1023]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 16 13:31:51.829015 ignition[1023]: DEBUG : files: compiled without relabeling support, skipping Dec 16 13:31:51.829792 ignition[1023]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 13:31:51.829792 ignition[1023]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 13:31:51.831404 ignition[1023]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 13:31:51.831547 ignition[1023]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 13:31:51.831688 ignition[1023]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 13:31:51.831630 unknown[1023]: wrote ssh authorized keys file for user: core Dec 16 13:31:51.832971 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 13:31:51.833200 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 16 13:31:51.869747 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 13:31:51.943213 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 13:31:51.943489 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 13:31:51.943489 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 13:31:51.943489 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 13:31:51.943489 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 13:31:51.943489 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 13:31:51.943489 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 13:31:51.943489 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 13:31:51.944856 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Dec 16 13:31:51.945157 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 13:31:51.945360 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 13:31:51.945360 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 13:31:51.947531 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 13:31:51.947531 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 13:31:51.947936 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Dec 16 13:31:52.375550 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 13:31:52.547112 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 13:31:52.547556 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Dec 16 13:31:52.548232 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Dec 16 13:31:52.548413 ignition[1023]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 16 13:31:52.548676 ignition[1023]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 13:31:52.549117 ignition[1023]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 13:31:52.549117 ignition[1023]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 16 13:31:52.549117 ignition[1023]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Dec 16 13:31:52.549559 ignition[1023]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 16 13:31:52.549559 ignition[1023]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 16 13:31:52.549559 ignition[1023]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Dec 16 13:31:52.549559 ignition[1023]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Dec 16 13:31:52.570280 ignition[1023]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 16 13:31:52.572004 ignition[1023]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 16 13:31:52.572161 ignition[1023]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Dec 16 13:31:52.572161 ignition[1023]: INFO : files: op(12): [started] setting preset 
to enabled for "prepare-helm.service" Dec 16 13:31:52.572161 ignition[1023]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 13:31:52.572161 ignition[1023]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 13:31:52.573458 ignition[1023]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 13:31:52.573458 ignition[1023]: INFO : files: files passed Dec 16 13:31:52.573458 ignition[1023]: INFO : Ignition finished successfully Dec 16 13:31:52.573060 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 13:31:52.574170 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 13:31:52.574926 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 13:31:52.574991 systemd-networkd[849]: ens192: Gained IPv6LL Dec 16 13:31:52.589325 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 13:31:52.589482 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 13:31:52.591799 initrd-setup-root-after-ignition[1055]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:31:52.591799 initrd-setup-root-after-ignition[1055]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:31:52.592624 initrd-setup-root-after-ignition[1059]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:31:52.593497 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 13:31:52.593779 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 13:31:52.594354 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 13:31:52.617323 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 13:31:52.617398 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 13:31:52.617760 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 13:31:52.618147 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 13:31:52.618416 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 13:31:52.619001 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 13:31:52.630739 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 13:31:52.631609 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 13:31:52.641212 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:31:52.641478 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:31:52.641794 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 13:31:52.642132 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 13:31:52.642294 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 13:31:52.642664 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 13:31:52.642905 systemd[1]: Stopped target basic.target - Basic System. Dec 16 13:31:52.643159 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
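The files stage above writes the Helm tarball, several YAML manifests, update.conf, the kubernetes sysext link and image, a VMware network unit, and the prepare-helm and coreos-metadata units, then enables prepare-helm.service and sets coreos-metadata.service's preset to disabled. A hypothetical Ignition-style fragment sketching a few of those operations might look like the following; field names follow the public Ignition v3 config spec, and this is not the actual config used on this host:

# Hypothetical Ignition v3 config fragment, expressed as a Python dict for
# readability. It would produce "files"-stage log lines similar to the ones above.
import json

config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"}},
            {"path": "/etc/flatcar/update.conf",
             "contents": {"source": "data:,"}},  # placeholder contents, illustrative only
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True},
            {"name": "coreos-metadata.service", "enabled": False},
        ],
    },
}

print(json.dumps(config, indent=2))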
Dec 16 13:31:52.643418 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:31:52.643661 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 13:31:52.643947 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:31:52.644217 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 13:31:52.644445 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 13:31:52.644744 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 13:31:52.645041 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 13:31:52.645289 systemd[1]: Stopped target swap.target - Swaps. Dec 16 13:31:52.645474 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 13:31:52.645538 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 13:31:52.646035 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:31:52.646232 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:31:52.646393 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 13:31:52.646447 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:31:52.646620 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 13:31:52.646701 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 13:31:52.646996 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 13:31:52.647078 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:31:52.647373 systemd[1]: Stopped target paths.target - Path Units. Dec 16 13:31:52.647520 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 13:31:52.651922 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:31:52.652214 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 13:31:52.652433 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 13:31:52.652703 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 13:31:52.652790 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:31:52.653066 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 13:31:52.653125 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:31:52.653485 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 13:31:52.653598 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 13:31:52.653923 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 13:31:52.654027 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 13:31:52.654938 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 13:31:52.655109 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 13:31:52.655223 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:31:52.657045 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 13:31:52.657268 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 13:31:52.657376 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:31:52.657700 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Dec 16 13:31:52.657796 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:31:52.661403 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 13:31:52.662420 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 13:31:52.670749 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 13:31:52.674828 ignition[1079]: INFO : Ignition 2.22.0 Dec 16 13:31:52.674828 ignition[1079]: INFO : Stage: umount Dec 16 13:31:52.674828 ignition[1079]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:31:52.674828 ignition[1079]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Dec 16 13:31:52.675598 ignition[1079]: INFO : umount: umount passed Dec 16 13:31:52.675723 ignition[1079]: INFO : Ignition finished successfully Dec 16 13:31:52.676516 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 13:31:52.676584 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 13:31:52.676991 systemd[1]: Stopped target network.target - Network. Dec 16 13:31:52.677112 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 13:31:52.677140 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 13:31:52.677276 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 13:31:52.677298 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 13:31:52.677434 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 13:31:52.677454 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 13:31:52.677601 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 13:31:52.677620 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 13:31:52.677982 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 13:31:52.678193 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 13:31:52.679520 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 13:31:52.679581 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 13:31:52.680645 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 16 13:31:52.681053 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 13:31:52.681097 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:31:52.681906 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 16 13:31:52.688149 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 13:31:52.688226 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 13:31:52.688719 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 13:31:52.688922 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 13:31:52.688945 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:31:52.689872 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 13:31:52.690027 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 13:31:52.690062 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:31:52.691035 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. 
Dec 16 13:31:52.691064 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Dec 16 13:31:52.691258 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 13:31:52.691283 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:31:52.691606 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 13:31:52.691633 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 13:31:52.693598 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:31:52.703962 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 13:31:52.704030 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 13:31:52.705090 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 13:31:52.705187 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:31:52.705508 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 13:31:52.705542 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 13:31:52.705658 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 13:31:52.705673 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:31:52.705849 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 13:31:52.705872 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:31:52.706166 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 13:31:52.706190 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 13:31:52.706466 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 13:31:52.706489 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 13:31:52.708877 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 13:31:52.709003 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 13:31:52.709030 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:31:52.709333 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 13:31:52.709357 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:31:52.710019 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:31:52.710046 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:31:52.729065 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 13:31:52.729256 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 13:31:52.729569 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 13:31:52.729753 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 13:31:52.730351 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 13:31:52.730446 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 13:31:52.730474 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 13:31:52.731335 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 13:31:52.743612 systemd[1]: Switching root. 
Dec 16 13:31:52.771752 systemd-journald[226]: Journal stopped Dec 16 13:31:53.920632 systemd-journald[226]: Received SIGTERM from PID 1 (systemd). Dec 16 13:31:53.920652 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 13:31:53.920660 kernel: SELinux: policy capability open_perms=1 Dec 16 13:31:53.920665 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 13:31:53.920670 kernel: SELinux: policy capability always_check_network=0 Dec 16 13:31:53.920676 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 13:31:53.920681 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 13:31:53.920688 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 13:31:53.920694 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 13:31:53.920699 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 13:31:53.920704 kernel: audit: type=1403 audit(1765891913.422:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 16 13:31:53.920712 systemd[1]: Successfully loaded SELinux policy in 53.541ms. Dec 16 13:31:53.920718 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.645ms. Dec 16 13:31:53.920725 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:31:53.920733 systemd[1]: Detected virtualization vmware. Dec 16 13:31:53.920739 systemd[1]: Detected architecture x86-64. Dec 16 13:31:53.920745 systemd[1]: Detected first boot. Dec 16 13:31:53.920752 systemd[1]: Initializing machine ID from random generator. Dec 16 13:31:53.920759 zram_generator::config[1122]: No configuration found. Dec 16 13:31:53.921847 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Dec 16 13:31:53.921860 kernel: Guest personality initialized and is active Dec 16 13:31:53.921867 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 13:31:53.921873 kernel: Initialized host personality Dec 16 13:31:53.921879 kernel: NET: Registered PF_VSOCK protocol family Dec 16 13:31:53.921887 systemd[1]: Populated /etc with preset unit settings. Dec 16 13:31:53.921895 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Dec 16 13:31:53.921902 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Dec 16 13:31:53.921909 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 16 13:31:53.921915 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 13:31:53.921921 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 13:31:53.921927 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 13:31:53.921935 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 13:31:53.921942 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 13:31:53.921948 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 13:31:53.921955 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
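The two systemd-journald[226] lines above straddle the switch from the initrd to the real root: the journal stops at 13:31:52.771752 and the SIGTERM message is logged at 13:31:53.920632. A small worked example of measuring such gaps from the timestamp prefixes follows; the year is supplied by hand because the journal's short prefix omits it:

# Sketch: parse two journal timestamp prefixes seen above and compute their gap.
from datetime import datetime

FMT = "%b %d %H:%M:%S.%f %Y"

def parse(ts, year=2025):  # year assumed for illustration; the prefix does not carry it
    return datetime.strptime(f"{ts} {year}", FMT)

delta = parse("Dec 16 13:31:53.920632") - parse("Dec 16 13:31:52.771752")
print(f"gap across the root switch: {delta.total_seconds():.6f} s")  # ~1.149 s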
Dec 16 13:31:53.921962 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 13:31:53.921969 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 13:31:53.921976 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 13:31:53.921982 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 13:31:53.921990 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:31:53.921997 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:31:53.922005 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 13:31:53.922012 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 13:31:53.922019 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 13:31:53.922026 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:31:53.922032 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 13:31:53.922039 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:31:53.922046 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:31:53.922053 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 13:31:53.922059 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 13:31:53.922066 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 13:31:53.922072 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 13:31:53.922079 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:31:53.922085 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:31:53.922092 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:31:53.922099 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:31:53.922107 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 13:31:53.922114 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 13:31:53.922120 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 13:31:53.922127 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:31:53.922135 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:31:53.922141 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:31:53.922148 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 13:31:53.922155 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 13:31:53.922161 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 13:31:53.922168 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 13:31:53.922175 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:31:53.922181 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 13:31:53.922189 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Dec 16 13:31:53.922196 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 13:31:53.922203 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 13:31:53.922209 systemd[1]: Reached target machines.target - Containers. Dec 16 13:31:53.922216 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 13:31:53.922222 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Dec 16 13:31:53.922229 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:31:53.922236 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 13:31:53.922243 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:31:53.922250 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 13:31:53.922257 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:31:53.922264 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 13:31:53.922270 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:31:53.922277 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 13:31:53.922284 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 13:31:53.922290 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 13:31:53.922297 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 13:31:53.922305 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 13:31:53.922312 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:31:53.922318 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:31:53.922325 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:31:53.922332 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:31:53.922338 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 13:31:53.922345 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 13:31:53.922351 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:31:53.922360 systemd[1]: verity-setup.service: Deactivated successfully. Dec 16 13:31:53.922366 systemd[1]: Stopped verity-setup.service. Dec 16 13:31:53.922373 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:31:53.922392 systemd-journald[1212]: Collecting audit messages is disabled. Dec 16 13:31:53.922409 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 13:31:53.922417 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 13:31:53.922424 systemd[1]: Mounted media.mount - External Media Directory. 
Dec 16 13:31:53.922431 systemd-journald[1212]: Journal started Dec 16 13:31:53.922445 systemd-journald[1212]: Runtime Journal (/run/log/journal/65b471cc6b014ddba795ff6bcf172c4b) is 4.8M, max 38.5M, 33.7M free. Dec 16 13:31:53.750792 systemd[1]: Queued start job for default target multi-user.target. Dec 16 13:31:53.763419 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 16 13:31:53.763704 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 13:31:53.922935 jq[1192]: true Dec 16 13:31:53.923269 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:31:53.924297 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 13:31:53.924801 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 13:31:53.924962 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 13:31:53.925859 kernel: loop: module loaded Dec 16 13:31:53.926033 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:31:53.926263 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 13:31:53.926596 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 13:31:53.927268 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:31:53.927378 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:31:53.927715 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:31:53.927811 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:31:53.928494 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:31:53.928593 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:31:53.929943 kernel: fuse: init (API version 7.41) Dec 16 13:31:53.929653 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 13:31:53.931274 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 13:31:53.931554 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:31:53.940360 jq[1227]: true Dec 16 13:31:53.940859 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 13:31:53.949109 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 13:31:53.953207 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 13:31:53.953331 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 13:31:53.953351 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:31:53.954023 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 13:31:53.958833 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 13:31:53.959039 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:31:53.961888 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 13:31:53.963633 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 13:31:53.963766 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 16 13:31:53.965922 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 13:31:53.966050 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:31:53.975927 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:31:53.977203 systemd-journald[1212]: Time spent on flushing to /var/log/journal/65b471cc6b014ddba795ff6bcf172c4b is 47.419ms for 1745 entries. Dec 16 13:31:53.977203 systemd-journald[1212]: System Journal (/var/log/journal/65b471cc6b014ddba795ff6bcf172c4b) is 8M, max 584.8M, 576.8M free. Dec 16 13:31:54.028727 systemd-journald[1212]: Received client request to flush runtime journal. Dec 16 13:31:54.018004 ignition[1245]: Ignition 2.22.0 Dec 16 13:31:53.980979 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 13:31:54.018204 ignition[1245]: deleting config from guestinfo properties Dec 16 13:31:53.982851 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 13:31:54.020635 ignition[1245]: Successfully deleted config Dec 16 13:31:53.983148 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:31:53.983402 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 13:31:53.983567 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 13:31:53.983895 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 13:31:53.987844 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:31:53.994786 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 13:31:53.998850 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 13:31:53.999273 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 13:31:54.002979 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 13:31:54.024017 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Dec 16 13:31:54.035089 kernel: loop0: detected capacity change from 0 to 110984 Dec 16 13:31:54.035133 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 13:31:54.037038 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:31:54.050833 kernel: ACPI: bus type drm_connector registered Dec 16 13:31:54.051224 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:31:54.051981 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:31:54.054039 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 13:31:54.076835 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 13:31:54.089865 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 13:31:54.091105 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:31:54.098987 kernel: loop1: detected capacity change from 0 to 2960 Dec 16 13:31:54.123833 kernel: loop2: detected capacity change from 0 to 128560 Dec 16 13:31:54.128209 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Dec 16 13:31:54.128367 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. 
Dec 16 13:31:54.137409 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:31:54.149829 kernel: loop3: detected capacity change from 0 to 229808 Dec 16 13:31:54.168393 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:31:54.182834 kernel: loop4: detected capacity change from 0 to 110984 Dec 16 13:31:54.237846 kernel: loop5: detected capacity change from 0 to 2960 Dec 16 13:31:54.253836 kernel: loop6: detected capacity change from 0 to 128560 Dec 16 13:31:54.280836 kernel: loop7: detected capacity change from 0 to 229808 Dec 16 13:31:54.297040 (sd-merge)[1298]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Dec 16 13:31:54.297303 (sd-merge)[1298]: Merged extensions into '/usr'. Dec 16 13:31:54.301064 systemd[1]: Reload requested from client PID 1268 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 13:31:54.301072 systemd[1]: Reloading... Dec 16 13:31:54.369831 zram_generator::config[1327]: No configuration found. Dec 16 13:31:54.479813 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Dec 16 13:31:54.525226 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 13:31:54.525390 systemd[1]: Reloading finished in 224 ms. Dec 16 13:31:54.539037 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 13:31:54.539410 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 13:31:54.546937 systemd[1]: Starting ensure-sysext.service... Dec 16 13:31:54.547744 ldconfig[1258]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 13:31:54.548888 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:31:54.549892 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:31:54.555942 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 13:31:54.561252 systemd[1]: Reload requested from client PID 1380 ('systemctl') (unit ensure-sysext.service)... Dec 16 13:31:54.561260 systemd[1]: Reloading... Dec 16 13:31:54.565073 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 13:31:54.565416 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 13:31:54.565608 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 13:31:54.565841 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 16 13:31:54.566355 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 16 13:31:54.566565 systemd-tmpfiles[1381]: ACLs are not supported, ignoring. Dec 16 13:31:54.566638 systemd-tmpfiles[1381]: ACLs are not supported, ignoring. Dec 16 13:31:54.568741 systemd-tmpfiles[1381]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:31:54.568788 systemd-tmpfiles[1381]: Skipping /boot Dec 16 13:31:54.572479 systemd-tmpfiles[1381]: Detected autofs mount point /boot during canonicalization of boot. 
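The (sd-merge) lines above report the containerd-flatcar, docker-flatcar, kubernetes, and oem-vmware extensions being merged into /usr. A rough sketch of enumerating candidate extension images follows; /etc/extensions is the directory populated during the Ignition files stage earlier, while the other search paths are well-known sysext locations that may be absent on a given image:

# Hedged sketch: list extension images (raw files or directories) in common
# systemd-sysext search paths. Illustrative; paths may not all exist.
import os

SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_extensions():
    found = []
    for base in SEARCH_PATHS:
        if not os.path.isdir(base):
            continue
        for entry in sorted(os.listdir(base)):
            full = os.path.join(base, entry)
            if entry.endswith(".raw") or os.path.isdir(full):
                found.append(full)
    return found

if __name__ == "__main__":
    for path in list_extensions():
        print(path)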
Dec 16 13:31:54.572526 systemd-tmpfiles[1381]: Skipping /boot Dec 16 13:31:54.579006 systemd-udevd[1382]: Using default interface naming scheme 'v255'. Dec 16 13:31:54.604849 zram_generator::config[1409]: No configuration found. Dec 16 13:31:54.719043 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Dec 16 13:31:54.758840 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 13:31:54.767939 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 16 13:31:54.771846 kernel: ACPI: button: Power Button [PWRF] Dec 16 13:31:54.780512 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 13:31:54.780726 systemd[1]: Reloading finished in 219 ms. Dec 16 13:31:54.787359 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:31:54.793875 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:31:54.805950 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:31:54.807983 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 13:31:54.809492 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 13:31:54.811937 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:31:54.816477 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:31:54.817951 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 13:31:54.826080 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 13:31:54.827180 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:31:54.829379 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:31:54.834155 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:31:54.839231 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:31:54.839394 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:31:54.839459 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:31:54.839520 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:31:54.848852 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 13:31:54.852271 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:31:54.852388 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:31:54.852474 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Dec 16 13:31:54.854390 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 13:31:54.854841 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:31:54.855357 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 13:31:54.858582 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Dec 16 13:31:54.871211 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:31:54.872897 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 13:31:54.873072 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:31:54.874617 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 13:31:54.874723 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:31:54.874813 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:31:54.875860 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 13:31:54.876780 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 13:31:54.878322 systemd[1]: Finished ensure-sysext.service. Dec 16 13:31:54.887900 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 13:31:54.888251 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:31:54.888400 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:31:54.888681 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:31:54.888811 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:31:54.902835 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:31:54.903710 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:31:54.904133 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:31:54.906263 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:31:54.906392 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:31:54.907793 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:31:54.908588 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 13:31:54.912204 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 13:31:54.912619 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 13:31:54.933783 augenrules[1555]: No rules Dec 16 13:31:54.934864 systemd[1]: audit-rules.service: Deactivated successfully. 
Dec 16 13:31:54.935006 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:31:54.947583 systemd-networkd[1506]: lo: Link UP Dec 16 13:31:54.949357 systemd-networkd[1506]: lo: Gained carrier Dec 16 13:31:54.951149 systemd-networkd[1506]: Enumeration completed Dec 16 13:31:54.951205 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:31:54.951982 systemd-networkd[1506]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Dec 16 13:31:54.952932 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 13:31:54.955833 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Dec 16 13:31:54.955954 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Dec 16 13:31:54.954879 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 13:31:54.958123 systemd-networkd[1506]: ens192: Link UP Dec 16 13:31:54.958242 systemd-networkd[1506]: ens192: Gained carrier Dec 16 13:31:54.977429 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 13:31:54.979086 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 13:31:54.979236 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 13:31:54.980047 systemd-resolved[1507]: Positive Trust Anchors: Dec 16 13:31:54.980183 systemd-resolved[1507]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:31:54.980270 systemd-resolved[1507]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:31:54.983012 systemd-resolved[1507]: Defaulting to hostname 'linux'. Dec 16 13:31:54.983891 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:31:54.984140 systemd[1]: Reached target network.target - Network. Dec 16 13:31:54.984350 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:31:54.984469 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:31:54.984741 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 13:31:54.984965 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 13:31:54.985196 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 13:31:54.985386 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 13:31:54.985688 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 13:31:54.985809 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 13:31:54.986028 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 13:31:54.986050 systemd[1]: Reached target paths.target - Path Units. 
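The ens192 interface above is configured from /etc/systemd/network/00-vmware.network, whose contents the log does not show. A systemd-networkd .network file of this kind generally looks like the hypothetical sketch below; the file name and DHCP choice are assumptions for illustration, not the shipped Flatcar configuration:

    # Hypothetical .network file; the real 00-vmware.network may differ
    cat <<'EOF' >/etc/systemd/network/50-example.network
    [Match]
    Name=ens192

    [Network]
    DHCP=yes
    EOF
    networkctl reload    # ask systemd-networkd to pick up the new file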
Dec 16 13:31:54.986248 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:31:54.987103 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 13:31:54.988026 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 13:31:54.990375 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 13:31:54.991093 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 13:31:54.991357 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 13:31:54.993065 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 13:31:54.993612 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 13:31:54.994215 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 13:31:54.995155 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:31:54.995788 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:31:54.996016 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:31:54.996034 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:31:54.997331 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 13:31:54.998459 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 13:31:55.000234 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 13:31:55.002912 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 13:31:55.003924 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 13:31:55.004041 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 13:31:55.005449 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 13:31:55.009157 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 13:31:55.014115 jq[1571]: false Dec 16 13:31:55.014773 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 13:31:55.015666 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 13:31:55.021067 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 13:31:55.030945 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 13:31:55.031574 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 13:31:55.033332 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Refreshing passwd entry cache Dec 16 13:31:55.033337 oslogin_cache_refresh[1573]: Refreshing passwd entry cache Dec 16 13:31:55.034085 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 13:31:55.034861 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 13:31:55.038876 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 13:31:55.042175 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... 
Dec 16 13:31:55.044488 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Failure getting users, quitting Dec 16 13:31:55.044488 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:31:55.044354 oslogin_cache_refresh[1573]: Failure getting users, quitting Dec 16 13:31:55.044365 oslogin_cache_refresh[1573]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:31:55.044646 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Refreshing group entry cache Dec 16 13:31:55.044641 oslogin_cache_refresh[1573]: Refreshing group entry cache Dec 16 13:31:55.044713 extend-filesystems[1572]: Found /dev/sda6 Dec 16 13:31:55.047537 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 13:31:55.047832 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 13:31:55.048846 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 13:31:55.051480 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Failure getting groups, quitting Dec 16 13:31:55.051480 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:31:55.049250 oslogin_cache_refresh[1573]: Failure getting groups, quitting Dec 16 13:31:55.050008 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 13:31:55.049256 oslogin_cache_refresh[1573]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:31:55.050156 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 13:31:55.050406 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 13:31:55.050518 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 13:31:55.053861 extend-filesystems[1572]: Found /dev/sda9 Dec 16 13:31:55.058163 extend-filesystems[1572]: Checking size of /dev/sda9 Dec 16 13:33:37.404452 systemd-timesyncd[1530]: Contacted time server 50.218.103.254:123 (0.flatcar.pool.ntp.org). Dec 16 13:33:37.404512 systemd-timesyncd[1530]: Initial clock synchronization to Tue 2025-12-16 13:33:37.404285 UTC. Dec 16 13:33:37.408328 systemd-resolved[1507]: Clock change detected. Flushing caches. Dec 16 13:33:37.409186 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 13:33:37.410176 jq[1585]: true Dec 16 13:33:37.409511 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 13:33:37.428751 update_engine[1584]: I20251216 13:33:37.427376 1584 main.cc:92] Flatcar Update Engine starting Dec 16 13:33:37.427744 (ntainerd)[1607]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 13:33:37.428991 jq[1606]: true Dec 16 13:33:37.437288 tar[1595]: linux-amd64/LICENSE Dec 16 13:33:37.437426 extend-filesystems[1572]: Old size kept for /dev/sda9 Dec 16 13:33:37.437984 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 13:33:37.438138 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 13:33:37.438379 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Dec 16 13:33:37.438496 tar[1595]: linux-amd64/helm Dec 16 13:33:37.443981 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. 
Dec 16 13:33:37.445325 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Dec 16 13:33:37.455096 dbus-daemon[1569]: [system] SELinux support is enabled Dec 16 13:33:37.455183 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 13:33:37.457718 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 13:33:37.457737 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 13:33:37.457874 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 13:33:37.457885 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 13:33:37.472091 systemd[1]: Started update-engine.service - Update Engine. Dec 16 13:33:37.472378 update_engine[1584]: I20251216 13:33:37.472353 1584 update_check_scheduler.cc:74] Next update check in 10m51s Dec 16 13:33:37.483345 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 13:33:37.509436 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Dec 16 13:33:37.521862 unknown[1618]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Dec 16 13:33:37.531602 bash[1634]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:33:37.532445 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 13:33:37.533581 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 13:33:37.535365 unknown[1618]: Core dump limit set to -1 Dec 16 13:33:37.544550 systemd-logind[1583]: New seat seat0. Dec 16 13:33:37.545036 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 13:33:37.639953 (udev-worker)[1435]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Dec 16 13:33:37.645324 systemd-logind[1583]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 13:33:37.645693 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
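update-engine and locksmithd above are Flatcar's auto-update and reboot-coordination pair, and locksmithd reports starting with strategy "reboot". The strategy is commonly changed through an update.conf drop-in; the path and key below are assumptions to verify against the Flatcar documentation rather than anything shown in this log:

    # Hypothetical reboot-strategy override (verify path and keys against Flatcar docs)
    cat <<'EOF' >>/etc/flatcar/update.conf
    REBOOT_STRATEGY=etcd-lock
    EOF
    systemctl restart locksmithd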
Dec 16 13:33:37.691070 sshd_keygen[1599]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 13:33:37.715908 systemd-logind[1583]: Watching system buttons on /dev/input/event2 (Power Button) Dec 16 13:33:37.719343 containerd[1607]: time="2025-12-16T13:33:37Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 13:33:37.721503 containerd[1607]: time="2025-12-16T13:33:37.721452761Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 13:33:37.744747 locksmithd[1627]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 13:33:37.750581 containerd[1607]: time="2025-12-16T13:33:37.750545132Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.424µs" Dec 16 13:33:37.750663 containerd[1607]: time="2025-12-16T13:33:37.750653675Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 13:33:37.750704 containerd[1607]: time="2025-12-16T13:33:37.750696125Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 13:33:37.750862 containerd[1607]: time="2025-12-16T13:33:37.750852496Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 13:33:37.750919 containerd[1607]: time="2025-12-16T13:33:37.750910815Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 13:33:37.750963 containerd[1607]: time="2025-12-16T13:33:37.750955452Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:33:37.751037 containerd[1607]: time="2025-12-16T13:33:37.751027488Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:33:37.751402 containerd[1607]: time="2025-12-16T13:33:37.751391591Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:33:37.751891 containerd[1607]: time="2025-12-16T13:33:37.751878506Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:33:37.752134 containerd[1607]: time="2025-12-16T13:33:37.751983077Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:33:37.752134 containerd[1607]: time="2025-12-16T13:33:37.751995261Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:33:37.752134 containerd[1607]: time="2025-12-16T13:33:37.752001286Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 13:33:37.752134 containerd[1607]: time="2025-12-16T13:33:37.752043361Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 13:33:37.752678 containerd[1607]: time="2025-12-16T13:33:37.752544583Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:33:37.752729 containerd[1607]: time="2025-12-16T13:33:37.752720084Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:33:37.752838 containerd[1607]: time="2025-12-16T13:33:37.752790691Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 13:33:37.753017 containerd[1607]: time="2025-12-16T13:33:37.752891193Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 13:33:37.753490 containerd[1607]: time="2025-12-16T13:33:37.753403619Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 13:33:37.753490 containerd[1607]: time="2025-12-16T13:33:37.753454465Z" level=info msg="metadata content store policy set" policy=shared Dec 16 13:33:37.755361 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 13:33:37.757070 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 13:33:37.779552 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 13:33:37.779699 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 13:33:37.782478 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 13:33:37.787965 containerd[1607]: time="2025-12-16T13:33:37.787945141Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 13:33:37.788298 containerd[1607]: time="2025-12-16T13:33:37.788037726Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 13:33:37.788298 containerd[1607]: time="2025-12-16T13:33:37.788051561Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 13:33:37.788298 containerd[1607]: time="2025-12-16T13:33:37.788059296Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 13:33:37.788298 containerd[1607]: time="2025-12-16T13:33:37.788066899Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 13:33:37.788298 containerd[1607]: time="2025-12-16T13:33:37.788078310Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 13:33:37.788298 containerd[1607]: time="2025-12-16T13:33:37.788118451Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 13:33:37.788298 containerd[1607]: time="2025-12-16T13:33:37.788128591Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 13:33:37.788298 containerd[1607]: time="2025-12-16T13:33:37.788134933Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 13:33:37.788298 containerd[1607]: time="2025-12-16T13:33:37.788140577Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 13:33:37.788298 containerd[1607]: time="2025-12-16T13:33:37.788146251Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 13:33:37.788298 containerd[1607]: 
time="2025-12-16T13:33:37.788156845Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 13:33:37.788581 containerd[1607]: time="2025-12-16T13:33:37.788422806Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 13:33:37.788955 containerd[1607]: time="2025-12-16T13:33:37.788614111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 13:33:37.788955 containerd[1607]: time="2025-12-16T13:33:37.788652175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 13:33:37.788955 containerd[1607]: time="2025-12-16T13:33:37.788662253Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 13:33:37.788955 containerd[1607]: time="2025-12-16T13:33:37.788669449Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 13:33:37.788955 containerd[1607]: time="2025-12-16T13:33:37.788726762Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 13:33:37.788955 containerd[1607]: time="2025-12-16T13:33:37.788734823Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 13:33:37.788955 containerd[1607]: time="2025-12-16T13:33:37.788740198Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 13:33:37.788955 containerd[1607]: time="2025-12-16T13:33:37.788745891Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 13:33:37.788955 containerd[1607]: time="2025-12-16T13:33:37.788751747Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 13:33:37.788955 containerd[1607]: time="2025-12-16T13:33:37.788758288Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:33:37.789249 containerd[1607]: time="2025-12-16T13:33:37.789106916Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:33:37.789249 containerd[1607]: time="2025-12-16T13:33:37.789121971Z" level=info msg="Start snapshots syncer" Dec 16 13:33:37.789249 containerd[1607]: time="2025-12-16T13:33:37.789137841Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 13:33:37.791691 containerd[1607]: time="2025-12-16T13:33:37.790941404Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:33:37.791691 containerd[1607]: time="2025-12-16T13:33:37.790971659Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:33:37.791786 containerd[1607]: time="2025-12-16T13:33:37.791011774Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:33:37.791786 containerd[1607]: time="2025-12-16T13:33:37.791080723Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:33:37.791786 containerd[1607]: time="2025-12-16T13:33:37.791093546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:33:37.791786 containerd[1607]: time="2025-12-16T13:33:37.791099833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:33:37.791786 containerd[1607]: time="2025-12-16T13:33:37.791105430Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:33:37.791786 containerd[1607]: time="2025-12-16T13:33:37.791112617Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:33:37.791786 containerd[1607]: time="2025-12-16T13:33:37.791118375Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:33:37.791786 containerd[1607]: time="2025-12-16T13:33:37.791124831Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:33:37.791786 containerd[1607]: time="2025-12-16T13:33:37.791138196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:33:37.791786 containerd[1607]: 
time="2025-12-16T13:33:37.791154023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:33:37.791786 containerd[1607]: time="2025-12-16T13:33:37.791160779Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:33:37.791786 containerd[1607]: time="2025-12-16T13:33:37.791183741Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:33:37.791786 containerd[1607]: time="2025-12-16T13:33:37.791193739Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:33:37.791786 containerd[1607]: time="2025-12-16T13:33:37.791198965Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:33:37.791979 containerd[1607]: time="2025-12-16T13:33:37.791204492Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:33:37.791979 containerd[1607]: time="2025-12-16T13:33:37.791208658Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:33:37.791979 containerd[1607]: time="2025-12-16T13:33:37.791213691Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:33:37.791979 containerd[1607]: time="2025-12-16T13:33:37.791236641Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:33:37.791979 containerd[1607]: time="2025-12-16T13:33:37.791247820Z" level=info msg="runtime interface created" Dec 16 13:33:37.791979 containerd[1607]: time="2025-12-16T13:33:37.791251213Z" level=info msg="created NRI interface" Dec 16 13:33:37.791979 containerd[1607]: time="2025-12-16T13:33:37.791255648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:33:37.791979 containerd[1607]: time="2025-12-16T13:33:37.791261665Z" level=info msg="Connect containerd service" Dec 16 13:33:37.791979 containerd[1607]: time="2025-12-16T13:33:37.791272577Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:33:37.793471 containerd[1607]: time="2025-12-16T13:33:37.793329772Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:33:37.821762 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 13:33:37.840983 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 13:33:37.852498 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 13:33:37.868563 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 13:33:37.869883 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
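The containerd error at the end of the plugin load above ("no network config found in /etc/cni/net.d") is expected on a node that has not yet received a pod-network add-on; it clears once a CNI configuration exists in that directory. A hypothetical minimal conflist of the kind the CRI plugin would pick up is sketched below; the file name, network name, and subnet are made up, and in a real cluster the network add-on normally writes this file itself:

    # Hypothetical /etc/cni/net.d/10-example.conflist
    mkdir -p /etc/cni/net.d
    cat <<'EOF' >/etc/cni/net.d/10-example.conflist
    {
      "cniVersion": "1.0.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF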
Dec 16 13:33:37.904073 containerd[1607]: time="2025-12-16T13:33:37.904020072Z" level=info msg="Start subscribing containerd event" Dec 16 13:33:37.904259 containerd[1607]: time="2025-12-16T13:33:37.904153113Z" level=info msg="Start recovering state" Dec 16 13:33:37.904259 containerd[1607]: time="2025-12-16T13:33:37.904212371Z" level=info msg="Start event monitor" Dec 16 13:33:37.904259 containerd[1607]: time="2025-12-16T13:33:37.904220438Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:33:37.904339 containerd[1607]: time="2025-12-16T13:33:37.904331109Z" level=info msg="Start streaming server" Dec 16 13:33:37.904373 containerd[1607]: time="2025-12-16T13:33:37.904366770Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:33:37.904471 containerd[1607]: time="2025-12-16T13:33:37.904398205Z" level=info msg="runtime interface starting up..." Dec 16 13:33:37.904471 containerd[1607]: time="2025-12-16T13:33:37.904403855Z" level=info msg="starting plugins..." Dec 16 13:33:37.904471 containerd[1607]: time="2025-12-16T13:33:37.904413195Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:33:37.904667 containerd[1607]: time="2025-12-16T13:33:37.904651067Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 13:33:37.904700 containerd[1607]: time="2025-12-16T13:33:37.904682005Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 13:33:37.904719 containerd[1607]: time="2025-12-16T13:33:37.904714913Z" level=info msg="containerd successfully booted in 0.188365s" Dec 16 13:33:37.905315 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 13:33:37.980245 tar[1595]: linux-amd64/README.md Dec 16 13:33:37.991895 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 13:33:39.144492 systemd-networkd[1506]: ens192: Gained IPv6LL Dec 16 13:33:39.146305 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 13:33:39.146817 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 13:33:39.148075 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Dec 16 13:33:39.151284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:33:39.153530 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 13:33:39.190546 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 13:33:39.191651 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 16 13:33:39.191770 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Dec 16 13:33:39.192548 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 13:33:40.005630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:33:40.006524 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 13:33:40.007025 systemd[1]: Startup finished in 2.489s (kernel) + 4.829s (initrd) + 4.290s (userspace) = 11.609s. 
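With containerd now serving on /run/containerd/containerd.sock as logged above, a common way to inspect it from the CRI side is crictl pointed at that socket. A short sketch, assuming crictl is installed (the log does not show it):

    # Hypothetical crictl setup against the socket logged above
    cat <<'EOF' >/etc/crictl.yaml
    runtime-endpoint: unix:///run/containerd/containerd.sock
    EOF
    crictl info    # dumps CRI runtime status and the effective config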
Dec 16 13:33:40.018834 (kubelet)[1784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:33:40.046115 login[1716]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 13:33:40.046336 login[1707]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 13:33:40.052492 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 13:33:40.053553 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 13:33:40.058514 systemd-logind[1583]: New session 2 of user core. Dec 16 13:33:40.063395 systemd-logind[1583]: New session 1 of user core. Dec 16 13:33:40.068277 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 13:33:40.070213 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 13:33:40.080815 (systemd)[1791]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 13:33:40.082630 systemd-logind[1583]: New session c1 of user core. Dec 16 13:33:40.177328 systemd[1791]: Queued start job for default target default.target. Dec 16 13:33:40.187238 systemd[1791]: Created slice app.slice - User Application Slice. Dec 16 13:33:40.187263 systemd[1791]: Reached target paths.target - Paths. Dec 16 13:33:40.187300 systemd[1791]: Reached target timers.target - Timers. Dec 16 13:33:40.189292 systemd[1791]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 13:33:40.195826 systemd[1791]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 13:33:40.196326 systemd[1791]: Reached target sockets.target - Sockets. Dec 16 13:33:40.196390 systemd[1791]: Reached target basic.target - Basic System. Dec 16 13:33:40.196426 systemd[1791]: Reached target default.target - Main User Target. Dec 16 13:33:40.196444 systemd[1791]: Startup finished in 110ms. Dec 16 13:33:40.196444 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 13:33:40.202301 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 13:33:40.202850 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 13:33:40.522536 kubelet[1784]: E1216 13:33:40.522504 1784 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:33:40.524040 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:33:40.524123 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:33:40.524370 systemd[1]: kubelet.service: Consumed 647ms CPU time, 264.7M memory peak. Dec 16 13:33:50.774695 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 13:33:50.775959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:33:51.130268 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 13:33:51.133317 (kubelet)[1834]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:33:51.182075 kubelet[1834]: E1216 13:33:51.182033 1834 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:33:51.185368 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:33:51.185546 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:33:51.186016 systemd[1]: kubelet.service: Consumed 109ms CPU time, 110.6M memory peak. Dec 16 13:34:01.436066 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 13:34:01.437743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:34:01.794858 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:34:01.799536 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:34:01.830729 kubelet[1849]: E1216 13:34:01.830692 1849 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:34:01.832357 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:34:01.832508 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:34:01.832886 systemd[1]: kubelet.service: Consumed 103ms CPU time, 110.4M memory peak. Dec 16 13:34:07.745411 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 13:34:07.746802 systemd[1]: Started sshd@0-139.178.70.100:22-139.178.89.65:60784.service - OpenSSH per-connection server daemon (139.178.89.65:60784). Dec 16 13:34:07.798081 sshd[1857]: Accepted publickey for core from 139.178.89.65 port 60784 ssh2: RSA SHA256:ptIxrQTrYo0+chtuEVmg9+TwAphmKe0c6qeNmkN5+wE Dec 16 13:34:07.798885 sshd-session[1857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:34:07.802086 systemd-logind[1583]: New session 3 of user core. Dec 16 13:34:07.812392 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 13:34:07.866500 systemd[1]: Started sshd@1-139.178.70.100:22-139.178.89.65:60792.service - OpenSSH per-connection server daemon (139.178.89.65:60792). Dec 16 13:34:07.904984 sshd[1863]: Accepted publickey for core from 139.178.89.65 port 60792 ssh2: RSA SHA256:ptIxrQTrYo0+chtuEVmg9+TwAphmKe0c6qeNmkN5+wE Dec 16 13:34:07.905614 sshd-session[1863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:34:07.909003 systemd-logind[1583]: New session 4 of user core. Dec 16 13:34:07.916405 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 13:34:07.965866 sshd[1866]: Connection closed by 139.178.89.65 port 60792 Dec 16 13:34:07.965588 sshd-session[1863]: pam_unix(sshd:session): session closed for user core Dec 16 13:34:07.976763 systemd[1]: sshd@1-139.178.70.100:22-139.178.89.65:60792.service: Deactivated successfully. 
Dec 16 13:34:07.978097 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 13:34:07.979599 systemd-logind[1583]: Session 4 logged out. Waiting for processes to exit. Dec 16 13:34:07.980773 systemd[1]: Started sshd@2-139.178.70.100:22-139.178.89.65:60804.service - OpenSSH per-connection server daemon (139.178.89.65:60804). Dec 16 13:34:07.981945 systemd-logind[1583]: Removed session 4. Dec 16 13:34:08.020747 sshd[1872]: Accepted publickey for core from 139.178.89.65 port 60804 ssh2: RSA SHA256:ptIxrQTrYo0+chtuEVmg9+TwAphmKe0c6qeNmkN5+wE Dec 16 13:34:08.021367 sshd-session[1872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:34:08.025389 systemd-logind[1583]: New session 5 of user core. Dec 16 13:34:08.031406 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 13:34:08.077164 sshd[1876]: Connection closed by 139.178.89.65 port 60804 Dec 16 13:34:08.077516 sshd-session[1872]: pam_unix(sshd:session): session closed for user core Dec 16 13:34:08.082800 systemd[1]: sshd@2-139.178.70.100:22-139.178.89.65:60804.service: Deactivated successfully. Dec 16 13:34:08.083802 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 13:34:08.084716 systemd-logind[1583]: Session 5 logged out. Waiting for processes to exit. Dec 16 13:34:08.085864 systemd[1]: Started sshd@3-139.178.70.100:22-139.178.89.65:60816.service - OpenSSH per-connection server daemon (139.178.89.65:60816). Dec 16 13:34:08.088572 systemd-logind[1583]: Removed session 5. Dec 16 13:34:08.126610 sshd[1882]: Accepted publickey for core from 139.178.89.65 port 60816 ssh2: RSA SHA256:ptIxrQTrYo0+chtuEVmg9+TwAphmKe0c6qeNmkN5+wE Dec 16 13:34:08.127366 sshd-session[1882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:34:08.130655 systemd-logind[1583]: New session 6 of user core. Dec 16 13:34:08.140394 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 13:34:08.189985 sshd[1885]: Connection closed by 139.178.89.65 port 60816 Dec 16 13:34:08.190786 sshd-session[1882]: pam_unix(sshd:session): session closed for user core Dec 16 13:34:08.196822 systemd[1]: sshd@3-139.178.70.100:22-139.178.89.65:60816.service: Deactivated successfully. Dec 16 13:34:08.197926 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 13:34:08.198573 systemd-logind[1583]: Session 6 logged out. Waiting for processes to exit. Dec 16 13:34:08.200106 systemd[1]: Started sshd@4-139.178.70.100:22-139.178.89.65:60820.service - OpenSSH per-connection server daemon (139.178.89.65:60820). Dec 16 13:34:08.201657 systemd-logind[1583]: Removed session 6. Dec 16 13:34:08.241623 sshd[1891]: Accepted publickey for core from 139.178.89.65 port 60820 ssh2: RSA SHA256:ptIxrQTrYo0+chtuEVmg9+TwAphmKe0c6qeNmkN5+wE Dec 16 13:34:08.242380 sshd-session[1891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:34:08.245636 systemd-logind[1583]: New session 7 of user core. Dec 16 13:34:08.255396 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 16 13:34:08.312548 sudo[1895]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 13:34:08.312761 sudo[1895]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:34:08.325774 sudo[1895]: pam_unix(sudo:session): session closed for user root Dec 16 13:34:08.326629 sshd[1894]: Connection closed by 139.178.89.65 port 60820 Dec 16 13:34:08.327039 sshd-session[1891]: pam_unix(sshd:session): session closed for user core Dec 16 13:34:08.333781 systemd[1]: sshd@4-139.178.70.100:22-139.178.89.65:60820.service: Deactivated successfully. Dec 16 13:34:08.334783 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:34:08.335723 systemd-logind[1583]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:34:08.337060 systemd[1]: Started sshd@5-139.178.70.100:22-139.178.89.65:60832.service - OpenSSH per-connection server daemon (139.178.89.65:60832). Dec 16 13:34:08.338852 systemd-logind[1583]: Removed session 7. Dec 16 13:34:08.381516 sshd[1901]: Accepted publickey for core from 139.178.89.65 port 60832 ssh2: RSA SHA256:ptIxrQTrYo0+chtuEVmg9+TwAphmKe0c6qeNmkN5+wE Dec 16 13:34:08.382317 sshd-session[1901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:34:08.385488 systemd-logind[1583]: New session 8 of user core. Dec 16 13:34:08.393389 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 13:34:08.441707 sudo[1906]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 13:34:08.442091 sudo[1906]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:34:08.444937 sudo[1906]: pam_unix(sudo:session): session closed for user root Dec 16 13:34:08.448645 sudo[1905]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 13:34:08.448833 sudo[1905]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:34:08.456126 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:34:08.482651 augenrules[1928]: No rules Dec 16 13:34:08.483316 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:34:08.483569 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:34:08.484346 sudo[1905]: pam_unix(sudo:session): session closed for user root Dec 16 13:34:08.486264 sshd[1904]: Connection closed by 139.178.89.65 port 60832 Dec 16 13:34:08.485740 sshd-session[1901]: pam_unix(sshd:session): session closed for user core Dec 16 13:34:08.495726 systemd[1]: sshd@5-139.178.70.100:22-139.178.89.65:60832.service: Deactivated successfully. Dec 16 13:34:08.496832 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 13:34:08.497478 systemd-logind[1583]: Session 8 logged out. Waiting for processes to exit. Dec 16 13:34:08.498794 systemd[1]: Started sshd@6-139.178.70.100:22-139.178.89.65:60834.service - OpenSSH per-connection server daemon (139.178.89.65:60834). Dec 16 13:34:08.499771 systemd-logind[1583]: Removed session 8. Dec 16 13:34:08.532199 sshd[1937]: Accepted publickey for core from 139.178.89.65 port 60834 ssh2: RSA SHA256:ptIxrQTrYo0+chtuEVmg9+TwAphmKe0c6qeNmkN5+wE Dec 16 13:34:08.532939 sshd-session[1937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:34:08.536186 systemd-logind[1583]: New session 9 of user core. Dec 16 13:34:08.544421 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 16 13:34:08.593105 sudo[1941]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 13:34:08.593700 sudo[1941]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:34:08.895949 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 13:34:08.906538 (dockerd)[1959]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 13:34:09.120805 dockerd[1959]: time="2025-12-16T13:34:09.120655834Z" level=info msg="Starting up" Dec 16 13:34:09.121303 dockerd[1959]: time="2025-12-16T13:34:09.121292830Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 13:34:09.126930 dockerd[1959]: time="2025-12-16T13:34:09.126897097Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 13:34:09.153016 dockerd[1959]: time="2025-12-16T13:34:09.152937066Z" level=info msg="Loading containers: start." Dec 16 13:34:09.160251 kernel: Initializing XFRM netlink socket Dec 16 13:34:09.295543 systemd-networkd[1506]: docker0: Link UP Dec 16 13:34:09.296826 dockerd[1959]: time="2025-12-16T13:34:09.296807303Z" level=info msg="Loading containers: done." Dec 16 13:34:09.304110 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck927028906-merged.mount: Deactivated successfully. Dec 16 13:34:09.306394 dockerd[1959]: time="2025-12-16T13:34:09.306367772Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 13:34:09.306450 dockerd[1959]: time="2025-12-16T13:34:09.306417194Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 13:34:09.306469 dockerd[1959]: time="2025-12-16T13:34:09.306455874Z" level=info msg="Initializing buildkit" Dec 16 13:34:09.315612 dockerd[1959]: time="2025-12-16T13:34:09.315598136Z" level=info msg="Completed buildkit initialization" Dec 16 13:34:09.320025 dockerd[1959]: time="2025-12-16T13:34:09.320010915Z" level=info msg="Daemon has completed initialization" Dec 16 13:34:09.320095 dockerd[1959]: time="2025-12-16T13:34:09.320078365Z" level=info msg="API listen on /run/docker.sock" Dec 16 13:34:09.320188 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 13:34:09.982437 containerd[1607]: time="2025-12-16T13:34:09.982405739Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 13:34:10.636684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1796871992.mount: Deactivated successfully. 
Dec 16 13:34:11.684777 containerd[1607]: time="2025-12-16T13:34:11.684752770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:11.685204 containerd[1607]: time="2025-12-16T13:34:11.685187326Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Dec 16 13:34:11.686016 containerd[1607]: time="2025-12-16T13:34:11.686004021Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:11.687202 containerd[1607]: time="2025-12-16T13:34:11.687192541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:11.687873 containerd[1607]: time="2025-12-16T13:34:11.687755948Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.705235849s" Dec 16 13:34:11.687903 containerd[1607]: time="2025-12-16T13:34:11.687877706Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 16 13:34:11.688246 containerd[1607]: time="2025-12-16T13:34:11.688236638Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 16 13:34:12.082922 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 13:34:12.084192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:34:12.297311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:34:12.307614 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:34:12.360039 kubelet[2234]: E1216 13:34:12.359828 2234 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:34:12.361157 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:34:12.361388 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:34:12.361798 systemd[1]: kubelet.service: Consumed 102ms CPU time, 108.2M memory peak. 
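The kubelet restart loop above repeats the same error each time: /var/lib/kubelet/config.yaml does not exist yet. That file is normally written by kubeadm during init/join, so the loop resolves itself once the node is joined; purely for illustration, a hand-written minimal KubeletConfiguration would look roughly like the sketch below (the fields are assumptions, not what kubeadm generates for this node):

    # Hypothetical minimal /var/lib/kubelet/config.yaml (normally generated by kubeadm)
    mkdir -p /var/lib/kubelet
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    authentication:
      anonymous:
        enabled: false
    EOF
    systemctl restart kubelet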
Dec 16 13:34:13.228210 containerd[1607]: time="2025-12-16T13:34:13.228183331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:13.236607 containerd[1607]: time="2025-12-16T13:34:13.236593609Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Dec 16 13:34:13.242032 containerd[1607]: time="2025-12-16T13:34:13.242018273Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:13.252710 containerd[1607]: time="2025-12-16T13:34:13.252695449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:13.253900 containerd[1607]: time="2025-12-16T13:34:13.253883415Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.565595423s" Dec 16 13:34:13.253962 containerd[1607]: time="2025-12-16T13:34:13.253952180Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 16 13:34:13.254507 containerd[1607]: time="2025-12-16T13:34:13.254487028Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 16 13:34:14.588960 containerd[1607]: time="2025-12-16T13:34:14.588464288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:14.588960 containerd[1607]: time="2025-12-16T13:34:14.588832632Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Dec 16 13:34:14.588960 containerd[1607]: time="2025-12-16T13:34:14.588936575Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:14.590319 containerd[1607]: time="2025-12-16T13:34:14.590305176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:14.590917 containerd[1607]: time="2025-12-16T13:34:14.590901688Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.33639288s" Dec 16 13:34:14.590945 containerd[1607]: time="2025-12-16T13:34:14.590917527Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 16 13:34:14.591187 
containerd[1607]: time="2025-12-16T13:34:14.591154166Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 16 13:34:15.547928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3304899102.mount: Deactivated successfully. Dec 16 13:34:15.893075 containerd[1607]: time="2025-12-16T13:34:15.892858215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:15.901300 containerd[1607]: time="2025-12-16T13:34:15.901269453Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Dec 16 13:34:15.908775 containerd[1607]: time="2025-12-16T13:34:15.908733535Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:15.918021 containerd[1607]: time="2025-12-16T13:34:15.917993107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:15.918653 containerd[1607]: time="2025-12-16T13:34:15.918438600Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.327267178s" Dec 16 13:34:15.918653 containerd[1607]: time="2025-12-16T13:34:15.918459526Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 16 13:34:15.919145 containerd[1607]: time="2025-12-16T13:34:15.919005061Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 16 13:34:16.507130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1582839456.mount: Deactivated successfully. 
Dec 16 13:34:17.318556 containerd[1607]: time="2025-12-16T13:34:17.318518874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:17.320912 containerd[1607]: time="2025-12-16T13:34:17.320891912Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Dec 16 13:34:17.331296 containerd[1607]: time="2025-12-16T13:34:17.331275118Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:17.341148 containerd[1607]: time="2025-12-16T13:34:17.341125405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:17.341902 containerd[1607]: time="2025-12-16T13:34:17.341881079Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.422856624s" Dec 16 13:34:17.341946 containerd[1607]: time="2025-12-16T13:34:17.341904228Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 16 13:34:17.342594 containerd[1607]: time="2025-12-16T13:34:17.342554417Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 13:34:17.978213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount373548838.mount: Deactivated successfully. 
Dec 16 13:34:17.980098 containerd[1607]: time="2025-12-16T13:34:17.980065203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:34:17.980534 containerd[1607]: time="2025-12-16T13:34:17.980518483Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 16 13:34:17.980564 containerd[1607]: time="2025-12-16T13:34:17.980551771Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:34:17.981919 containerd[1607]: time="2025-12-16T13:34:17.981897183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:34:17.982245 containerd[1607]: time="2025-12-16T13:34:17.982178395Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 639.603753ms" Dec 16 13:34:17.982245 containerd[1607]: time="2025-12-16T13:34:17.982192946Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 13:34:17.982646 containerd[1607]: time="2025-12-16T13:34:17.982634064Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 16 13:34:18.580625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2809311465.mount: Deactivated successfully. 
Dec 16 13:34:22.283955 containerd[1607]: time="2025-12-16T13:34:22.283925214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:22.284786 containerd[1607]: time="2025-12-16T13:34:22.284772135Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Dec 16 13:34:22.285029 containerd[1607]: time="2025-12-16T13:34:22.285014599Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:22.286996 containerd[1607]: time="2025-12-16T13:34:22.286982386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:22.287464 containerd[1607]: time="2025-12-16T13:34:22.287294932Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.304646152s" Dec 16 13:34:22.287464 containerd[1607]: time="2025-12-16T13:34:22.287313098Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 16 13:34:22.410570 update_engine[1584]: I20251216 13:34:22.410524 1584 update_attempter.cc:509] Updating boot flags... Dec 16 13:34:22.421692 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 13:34:22.423533 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:34:22.702705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:34:22.705131 (kubelet)[2414]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:34:22.885615 kubelet[2414]: E1216 13:34:22.885588 2414 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:34:22.887335 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:34:22.887438 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:34:22.887645 systemd[1]: kubelet.service: Consumed 97ms CPU time, 107.7M memory peak. Dec 16 13:34:24.506504 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:34:24.506756 systemd[1]: kubelet.service: Consumed 97ms CPU time, 107.7M memory peak. Dec 16 13:34:24.510361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:34:24.527186 systemd[1]: Reload requested from client PID 2436 ('systemctl') (unit session-9.scope)... Dec 16 13:34:24.527272 systemd[1]: Reloading... Dec 16 13:34:24.594264 zram_generator::config[2489]: No configuration found. 
Dec 16 13:34:24.660846 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Dec 16 13:34:24.726057 systemd[1]: Reloading finished in 198 ms. Dec 16 13:34:24.765202 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 13:34:24.765283 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 13:34:24.765510 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:34:24.766936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:34:25.135025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:34:25.144526 (kubelet)[2547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:34:25.194156 kubelet[2547]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:34:25.194156 kubelet[2547]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:34:25.194156 kubelet[2547]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:34:25.210377 kubelet[2547]: I1216 13:34:25.210324 2547 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:34:25.373142 kubelet[2547]: I1216 13:34:25.373126 2547 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 13:34:25.373142 kubelet[2547]: I1216 13:34:25.373139 2547 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:34:25.373290 kubelet[2547]: I1216 13:34:25.373279 2547 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:34:25.398029 kubelet[2547]: E1216 13:34:25.397857 2547 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 13:34:25.398289 kubelet[2547]: I1216 13:34:25.398191 2547 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:34:25.421054 kubelet[2547]: I1216 13:34:25.421037 2547 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:34:25.427223 kubelet[2547]: I1216 13:34:25.427162 2547 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 13:34:25.431024 kubelet[2547]: I1216 13:34:25.430909 2547 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:34:25.433847 kubelet[2547]: I1216 13:34:25.430925 2547 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:34:25.434077 kubelet[2547]: I1216 13:34:25.433963 2547 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:34:25.434077 kubelet[2547]: I1216 13:34:25.433977 2547 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 13:34:25.434077 kubelet[2547]: I1216 13:34:25.434051 2547 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:34:25.436933 kubelet[2547]: I1216 13:34:25.436923 2547 kubelet.go:480] "Attempting to sync node with API server" Dec 16 13:34:25.436995 kubelet[2547]: I1216 13:34:25.436988 2547 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:34:25.437053 kubelet[2547]: I1216 13:34:25.437047 2547 kubelet.go:386] "Adding apiserver pod source" Dec 16 13:34:25.438869 kubelet[2547]: I1216 13:34:25.438824 2547 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:34:25.446404 kubelet[2547]: I1216 13:34:25.446390 2547 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:34:25.447747 kubelet[2547]: I1216 13:34:25.447729 2547 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:34:25.448712 kubelet[2547]: W1216 13:34:25.448697 2547 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 16 13:34:25.453020 kubelet[2547]: I1216 13:34:25.452506 2547 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:34:25.453020 kubelet[2547]: I1216 13:34:25.452530 2547 server.go:1289] "Started kubelet" Dec 16 13:34:25.453020 kubelet[2547]: E1216 13:34:25.452617 2547 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:34:25.454354 kubelet[2547]: E1216 13:34:25.454343 2547 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:34:25.455552 kubelet[2547]: I1216 13:34:25.455535 2547 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:34:25.456701 kubelet[2547]: I1216 13:34:25.456281 2547 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:34:25.456701 kubelet[2547]: I1216 13:34:25.456441 2547 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:34:25.457408 kubelet[2547]: I1216 13:34:25.457398 2547 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:34:25.459986 kubelet[2547]: E1216 13:34:25.457689 2547 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.100:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.100:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1881b57a38fa66ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 13:34:25.452517102 +0000 UTC m=+0.305240421,LastTimestamp:2025-12-16 13:34:25.452517102 +0000 UTC m=+0.305240421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 13:34:25.463645 kubelet[2547]: I1216 13:34:25.463637 2547 server.go:317] "Adding debug handlers to kubelet server" Dec 16 13:34:25.465233 kubelet[2547]: I1216 13:34:25.464832 2547 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:34:25.465233 kubelet[2547]: E1216 13:34:25.465115 2547 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:34:25.467206 kubelet[2547]: I1216 13:34:25.467194 2547 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:34:25.467820 kubelet[2547]: I1216 13:34:25.467814 2547 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:34:25.467880 kubelet[2547]: I1216 13:34:25.467875 2547 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:34:25.474898 kubelet[2547]: E1216 13:34:25.474193 2547 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="200ms" Dec 16 13:34:25.474898 kubelet[2547]: I1216 13:34:25.474389 2547 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:34:25.474898 kubelet[2547]: I1216 13:34:25.474422 2547 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:34:25.474898 kubelet[2547]: E1216 13:34:25.474841 2547 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:34:25.475063 kubelet[2547]: I1216 13:34:25.475055 2547 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:34:25.475387 kubelet[2547]: I1216 13:34:25.475372 2547 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 13:34:25.475945 kubelet[2547]: I1216 13:34:25.475935 2547 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 13:34:25.475968 kubelet[2547]: I1216 13:34:25.475946 2547 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 13:34:25.475968 kubelet[2547]: I1216 13:34:25.475956 2547 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:34:25.475968 kubelet[2547]: I1216 13:34:25.475960 2547 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 13:34:25.476016 kubelet[2547]: E1216 13:34:25.475979 2547 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:34:25.480840 kubelet[2547]: E1216 13:34:25.480826 2547 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:34:25.481850 kubelet[2547]: E1216 13:34:25.481839 2547 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:34:25.503995 kubelet[2547]: I1216 13:34:25.503976 2547 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:34:25.503995 kubelet[2547]: I1216 13:34:25.503991 2547 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:34:25.504083 kubelet[2547]: I1216 13:34:25.504005 2547 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:34:25.505096 kubelet[2547]: I1216 13:34:25.505083 2547 policy_none.go:49] "None policy: Start" Dec 16 13:34:25.505096 kubelet[2547]: I1216 13:34:25.505097 2547 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:34:25.505162 kubelet[2547]: I1216 13:34:25.505105 2547 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:34:25.513047 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 16 13:34:25.525885 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 13:34:25.529416 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 13:34:25.549025 kubelet[2547]: E1216 13:34:25.548929 2547 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:34:25.549249 kubelet[2547]: I1216 13:34:25.549240 2547 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:34:25.549285 kubelet[2547]: I1216 13:34:25.549250 2547 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:34:25.549409 kubelet[2547]: I1216 13:34:25.549397 2547 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:34:25.550397 kubelet[2547]: E1216 13:34:25.550353 2547 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:34:25.550397 kubelet[2547]: E1216 13:34:25.550378 2547 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 16 13:34:25.584811 systemd[1]: Created slice kubepods-burstable-pod0d1c9c50755eee58e1ebb02ac7e972ec.slice - libcontainer container kubepods-burstable-pod0d1c9c50755eee58e1ebb02ac7e972ec.slice. Dec 16 13:34:25.605461 kubelet[2547]: E1216 13:34:25.605415 2547 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:34:25.609001 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Dec 16 13:34:25.610565 kubelet[2547]: E1216 13:34:25.610548 2547 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:34:25.613155 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
Dec 16 13:34:25.614460 kubelet[2547]: E1216 13:34:25.614445 2547 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:34:25.650236 kubelet[2547]: I1216 13:34:25.650166 2547 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 13:34:25.650823 kubelet[2547]: E1216 13:34:25.650720 2547 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Dec 16 13:34:25.669292 kubelet[2547]: I1216 13:34:25.669278 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d1c9c50755eee58e1ebb02ac7e972ec-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0d1c9c50755eee58e1ebb02ac7e972ec\") " pod="kube-system/kube-apiserver-localhost" Dec 16 13:34:25.669392 kubelet[2547]: I1216 13:34:25.669381 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:34:25.669519 kubelet[2547]: I1216 13:34:25.669443 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:34:25.669519 kubelet[2547]: I1216 13:34:25.669458 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:34:25.669519 kubelet[2547]: I1216 13:34:25.669469 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:34:25.669519 kubelet[2547]: I1216 13:34:25.669481 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d1c9c50755eee58e1ebb02ac7e972ec-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d1c9c50755eee58e1ebb02ac7e972ec\") " pod="kube-system/kube-apiserver-localhost" Dec 16 13:34:25.669680 kubelet[2547]: I1216 13:34:25.669491 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:34:25.669680 kubelet[2547]: I1216 13:34:25.669646 2547 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 16 13:34:25.669680 kubelet[2547]: I1216 13:34:25.669660 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d1c9c50755eee58e1ebb02ac7e972ec-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d1c9c50755eee58e1ebb02ac7e972ec\") " pod="kube-system/kube-apiserver-localhost" Dec 16 13:34:25.674749 kubelet[2547]: E1216 13:34:25.674724 2547 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="400ms" Dec 16 13:34:25.852401 kubelet[2547]: I1216 13:34:25.852353 2547 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 13:34:25.852685 kubelet[2547]: E1216 13:34:25.852662 2547 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Dec 16 13:34:25.906838 containerd[1607]: time="2025-12-16T13:34:25.906760599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0d1c9c50755eee58e1ebb02ac7e972ec,Namespace:kube-system,Attempt:0,}" Dec 16 13:34:25.918688 containerd[1607]: time="2025-12-16T13:34:25.917864660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Dec 16 13:34:25.918895 containerd[1607]: time="2025-12-16T13:34:25.918836461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Dec 16 13:34:25.986090 containerd[1607]: time="2025-12-16T13:34:25.986065134Z" level=info msg="connecting to shim 65048d2b1fe7d2ca0f53a2f053b8829a2d80dc3a61cd047a372ad877acd46fff" address="unix:///run/containerd/s/e39e39dfb9b5159d8e45c14df536a4348b53959ba4555a1f073ff67fa128b15d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:34:25.986315 containerd[1607]: time="2025-12-16T13:34:25.986159049Z" level=info msg="connecting to shim ff12a85faa1a4e829cc7cf6458b37b3033bf2c30c3fab29d34e89dd3a3292cb8" address="unix:///run/containerd/s/ce18d91b8fce0c5c1a6867635b65f4864c3c2a593106941fbec20332eaac16f6" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:34:25.993917 containerd[1607]: time="2025-12-16T13:34:25.993904008Z" level=info msg="connecting to shim 10c208c44dd61157ca690d7df9b196571082f60873b7705e3812d35559a31a0c" address="unix:///run/containerd/s/476e1be1ecfdd5e980fef4cfb39a82ced8685cc687c5f665720f8844ae3e75ef" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:34:26.075831 kubelet[2547]: E1216 13:34:26.075803 2547 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="800ms" Dec 16 13:34:26.126421 systemd[1]: Started cri-containerd-10c208c44dd61157ca690d7df9b196571082f60873b7705e3812d35559a31a0c.scope - 
libcontainer container 10c208c44dd61157ca690d7df9b196571082f60873b7705e3812d35559a31a0c. Dec 16 13:34:26.127748 systemd[1]: Started cri-containerd-65048d2b1fe7d2ca0f53a2f053b8829a2d80dc3a61cd047a372ad877acd46fff.scope - libcontainer container 65048d2b1fe7d2ca0f53a2f053b8829a2d80dc3a61cd047a372ad877acd46fff. Dec 16 13:34:26.130831 systemd[1]: Started cri-containerd-ff12a85faa1a4e829cc7cf6458b37b3033bf2c30c3fab29d34e89dd3a3292cb8.scope - libcontainer container ff12a85faa1a4e829cc7cf6458b37b3033bf2c30c3fab29d34e89dd3a3292cb8. Dec 16 13:34:26.188467 containerd[1607]: time="2025-12-16T13:34:26.188069194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"65048d2b1fe7d2ca0f53a2f053b8829a2d80dc3a61cd047a372ad877acd46fff\"" Dec 16 13:34:26.192805 containerd[1607]: time="2025-12-16T13:34:26.192787714Z" level=info msg="CreateContainer within sandbox \"65048d2b1fe7d2ca0f53a2f053b8829a2d80dc3a61cd047a372ad877acd46fff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:34:26.198869 containerd[1607]: time="2025-12-16T13:34:26.198812035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0d1c9c50755eee58e1ebb02ac7e972ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"10c208c44dd61157ca690d7df9b196571082f60873b7705e3812d35559a31a0c\"" Dec 16 13:34:26.199288 containerd[1607]: time="2025-12-16T13:34:26.199252814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff12a85faa1a4e829cc7cf6458b37b3033bf2c30c3fab29d34e89dd3a3292cb8\"" Dec 16 13:34:26.201176 containerd[1607]: time="2025-12-16T13:34:26.201121217Z" level=info msg="CreateContainer within sandbox \"10c208c44dd61157ca690d7df9b196571082f60873b7705e3812d35559a31a0c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:34:26.201625 containerd[1607]: time="2025-12-16T13:34:26.201613783Z" level=info msg="CreateContainer within sandbox \"ff12a85faa1a4e829cc7cf6458b37b3033bf2c30c3fab29d34e89dd3a3292cb8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:34:26.205721 containerd[1607]: time="2025-12-16T13:34:26.205705341Z" level=info msg="Container 463d29c007ba88d877b413985d210c3d8b9a6a2e360a26e502f7a5d1d89344f3: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:34:26.206179 containerd[1607]: time="2025-12-16T13:34:26.206066637Z" level=info msg="Container 6d14bf94a96d81e97e45cf81aaa1bc04d7af6a23a978aa95c4ca1d172f8fb9f4: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:34:26.208565 containerd[1607]: time="2025-12-16T13:34:26.208551068Z" level=info msg="Container 02131098a60c300da411c6b9187c1952146c29266a45e4d5bb16470dd75fef2f: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:34:26.210337 containerd[1607]: time="2025-12-16T13:34:26.210324952Z" level=info msg="CreateContainer within sandbox \"10c208c44dd61157ca690d7df9b196571082f60873b7705e3812d35559a31a0c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6d14bf94a96d81e97e45cf81aaa1bc04d7af6a23a978aa95c4ca1d172f8fb9f4\"" Dec 16 13:34:26.211547 containerd[1607]: time="2025-12-16T13:34:26.211536736Z" level=info msg="StartContainer for \"6d14bf94a96d81e97e45cf81aaa1bc04d7af6a23a978aa95c4ca1d172f8fb9f4\"" Dec 16 13:34:26.212004 containerd[1607]: time="2025-12-16T13:34:26.211993065Z" 
level=info msg="CreateContainer within sandbox \"65048d2b1fe7d2ca0f53a2f053b8829a2d80dc3a61cd047a372ad877acd46fff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"463d29c007ba88d877b413985d210c3d8b9a6a2e360a26e502f7a5d1d89344f3\"" Dec 16 13:34:26.212387 containerd[1607]: time="2025-12-16T13:34:26.212337924Z" level=info msg="StartContainer for \"463d29c007ba88d877b413985d210c3d8b9a6a2e360a26e502f7a5d1d89344f3\"" Dec 16 13:34:26.213658 containerd[1607]: time="2025-12-16T13:34:26.213645876Z" level=info msg="connecting to shim 463d29c007ba88d877b413985d210c3d8b9a6a2e360a26e502f7a5d1d89344f3" address="unix:///run/containerd/s/e39e39dfb9b5159d8e45c14df536a4348b53959ba4555a1f073ff67fa128b15d" protocol=ttrpc version=3 Dec 16 13:34:26.215234 containerd[1607]: time="2025-12-16T13:34:26.215171649Z" level=info msg="connecting to shim 6d14bf94a96d81e97e45cf81aaa1bc04d7af6a23a978aa95c4ca1d172f8fb9f4" address="unix:///run/containerd/s/476e1be1ecfdd5e980fef4cfb39a82ced8685cc687c5f665720f8844ae3e75ef" protocol=ttrpc version=3 Dec 16 13:34:26.215470 containerd[1607]: time="2025-12-16T13:34:26.215459366Z" level=info msg="CreateContainer within sandbox \"ff12a85faa1a4e829cc7cf6458b37b3033bf2c30c3fab29d34e89dd3a3292cb8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"02131098a60c300da411c6b9187c1952146c29266a45e4d5bb16470dd75fef2f\"" Dec 16 13:34:26.215821 containerd[1607]: time="2025-12-16T13:34:26.215796519Z" level=info msg="StartContainer for \"02131098a60c300da411c6b9187c1952146c29266a45e4d5bb16470dd75fef2f\"" Dec 16 13:34:26.217136 containerd[1607]: time="2025-12-16T13:34:26.217108525Z" level=info msg="connecting to shim 02131098a60c300da411c6b9187c1952146c29266a45e4d5bb16470dd75fef2f" address="unix:///run/containerd/s/ce18d91b8fce0c5c1a6867635b65f4864c3c2a593106941fbec20332eaac16f6" protocol=ttrpc version=3 Dec 16 13:34:26.229321 systemd[1]: Started cri-containerd-6d14bf94a96d81e97e45cf81aaa1bc04d7af6a23a978aa95c4ca1d172f8fb9f4.scope - libcontainer container 6d14bf94a96d81e97e45cf81aaa1bc04d7af6a23a978aa95c4ca1d172f8fb9f4. Dec 16 13:34:26.232248 systemd[1]: Started cri-containerd-463d29c007ba88d877b413985d210c3d8b9a6a2e360a26e502f7a5d1d89344f3.scope - libcontainer container 463d29c007ba88d877b413985d210c3d8b9a6a2e360a26e502f7a5d1d89344f3. Dec 16 13:34:26.236336 systemd[1]: Started cri-containerd-02131098a60c300da411c6b9187c1952146c29266a45e4d5bb16470dd75fef2f.scope - libcontainer container 02131098a60c300da411c6b9187c1952146c29266a45e4d5bb16470dd75fef2f. 
Dec 16 13:34:26.253966 kubelet[2547]: I1216 13:34:26.253943 2547 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 13:34:26.254384 kubelet[2547]: E1216 13:34:26.254121 2547 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Dec 16 13:34:26.280727 containerd[1607]: time="2025-12-16T13:34:26.280665375Z" level=info msg="StartContainer for \"02131098a60c300da411c6b9187c1952146c29266a45e4d5bb16470dd75fef2f\" returns successfully" Dec 16 13:34:26.287420 containerd[1607]: time="2025-12-16T13:34:26.287404552Z" level=info msg="StartContainer for \"6d14bf94a96d81e97e45cf81aaa1bc04d7af6a23a978aa95c4ca1d172f8fb9f4\" returns successfully" Dec 16 13:34:26.297679 containerd[1607]: time="2025-12-16T13:34:26.297656700Z" level=info msg="StartContainer for \"463d29c007ba88d877b413985d210c3d8b9a6a2e360a26e502f7a5d1d89344f3\" returns successfully" Dec 16 13:34:26.375678 kubelet[2547]: E1216 13:34:26.375656 2547 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:34:26.488837 kubelet[2547]: E1216 13:34:26.488765 2547 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:34:26.490097 kubelet[2547]: E1216 13:34:26.490084 2547 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:34:26.492959 kubelet[2547]: E1216 13:34:26.492944 2547 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:34:26.694678 kubelet[2547]: E1216 13:34:26.694655 2547 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:34:27.055413 kubelet[2547]: I1216 13:34:27.055398 2547 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 13:34:27.494615 kubelet[2547]: E1216 13:34:27.494496 2547 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:34:27.494998 kubelet[2547]: E1216 13:34:27.494933 2547 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:34:27.759545 kubelet[2547]: E1216 13:34:27.759039 2547 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 16 13:34:27.861267 kubelet[2547]: I1216 13:34:27.861240 2547 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 13:34:27.861267 kubelet[2547]: E1216 13:34:27.861264 2547 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node 
\"localhost\": node \"localhost\" not found" Dec 16 13:34:27.868807 kubelet[2547]: E1216 13:34:27.868774 2547 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:34:27.969836 kubelet[2547]: E1216 13:34:27.969818 2547 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:34:28.071068 kubelet[2547]: E1216 13:34:28.070846 2547 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:34:28.166065 kubelet[2547]: I1216 13:34:28.166031 2547 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 13:34:28.171590 kubelet[2547]: E1216 13:34:28.171174 2547 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 16 13:34:28.171590 kubelet[2547]: I1216 13:34:28.171462 2547 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 13:34:28.173126 kubelet[2547]: E1216 13:34:28.173114 2547 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 16 13:34:28.173184 kubelet[2547]: I1216 13:34:28.173178 2547 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 13:34:28.174497 kubelet[2547]: E1216 13:34:28.174469 2547 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 16 13:34:28.454997 kubelet[2547]: I1216 13:34:28.454942 2547 apiserver.go:52] "Watching apiserver" Dec 16 13:34:28.468494 kubelet[2547]: I1216 13:34:28.468469 2547 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:34:29.050247 kubelet[2547]: I1216 13:34:29.050160 2547 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 13:34:29.689640 systemd[1]: Reload requested from client PID 2825 ('systemctl') (unit session-9.scope)... Dec 16 13:34:29.689648 systemd[1]: Reloading... Dec 16 13:34:29.743301 zram_generator::config[2872]: No configuration found. Dec 16 13:34:29.818007 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Dec 16 13:34:29.890265 systemd[1]: Reloading finished in 200 ms. Dec 16 13:34:29.913371 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:34:29.925017 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:34:29.925201 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:34:29.925253 systemd[1]: kubelet.service: Consumed 441ms CPU time, 127.1M memory peak. Dec 16 13:34:29.926790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:34:30.166059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 13:34:30.176486 (kubelet)[2936]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:34:30.254935 kubelet[2936]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:34:30.254935 kubelet[2936]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:34:30.254935 kubelet[2936]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:34:30.254935 kubelet[2936]: I1216 13:34:30.254398 2936 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:34:30.258920 kubelet[2936]: I1216 13:34:30.258910 2936 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 13:34:30.258971 kubelet[2936]: I1216 13:34:30.258966 2936 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:34:30.259149 kubelet[2936]: I1216 13:34:30.259141 2936 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:34:30.259843 kubelet[2936]: I1216 13:34:30.259834 2936 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 13:34:30.261571 kubelet[2936]: I1216 13:34:30.261563 2936 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:34:30.265638 kubelet[2936]: I1216 13:34:30.265629 2936 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:34:30.267617 kubelet[2936]: I1216 13:34:30.267609 2936 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 13:34:30.267803 kubelet[2936]: I1216 13:34:30.267791 2936 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:34:30.267927 kubelet[2936]: I1216 13:34:30.267838 2936 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:34:30.268786 kubelet[2936]: I1216 13:34:30.268778 2936 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:34:30.268823 kubelet[2936]: I1216 13:34:30.268819 2936 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 13:34:30.268885 kubelet[2936]: I1216 13:34:30.268879 2936 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:34:30.269105 kubelet[2936]: I1216 13:34:30.269045 2936 kubelet.go:480] "Attempting to sync node with API server" Dec 16 13:34:30.269105 kubelet[2936]: I1216 13:34:30.269055 2936 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:34:30.269848 kubelet[2936]: I1216 13:34:30.269617 2936 kubelet.go:386] "Adding apiserver pod source" Dec 16 13:34:30.269848 kubelet[2936]: I1216 13:34:30.269630 2936 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:34:30.274025 kubelet[2936]: I1216 13:34:30.274013 2936 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:34:30.275423 kubelet[2936]: I1216 13:34:30.275409 2936 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:34:30.278787 kubelet[2936]: I1216 13:34:30.278761 2936 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:34:30.278787 kubelet[2936]: I1216 13:34:30.278786 2936 server.go:1289] "Started kubelet" Dec 16 13:34:30.281715 kubelet[2936]: I1216 13:34:30.281698 2936 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:34:30.289405 kubelet[2936]: I1216 13:34:30.289074 
2936 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:34:30.289474 kubelet[2936]: I1216 13:34:30.289428 2936 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:34:30.290525 kubelet[2936]: I1216 13:34:30.290511 2936 server.go:317] "Adding debug handlers to kubelet server" Dec 16 13:34:30.291258 kubelet[2936]: I1216 13:34:30.290782 2936 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:34:30.291258 kubelet[2936]: I1216 13:34:30.290881 2936 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:34:30.293761 kubelet[2936]: I1216 13:34:30.293360 2936 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:34:30.293761 kubelet[2936]: I1216 13:34:30.293478 2936 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:34:30.297662 kubelet[2936]: I1216 13:34:30.297653 2936 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:34:30.298431 kubelet[2936]: I1216 13:34:30.298423 2936 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:34:30.298528 kubelet[2936]: I1216 13:34:30.298513 2936 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:34:30.298834 kubelet[2936]: E1216 13:34:30.298824 2936 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:34:30.300033 kubelet[2936]: I1216 13:34:30.300006 2936 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 13:34:30.301366 kubelet[2936]: I1216 13:34:30.301358 2936 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:34:30.303895 kubelet[2936]: I1216 13:34:30.301509 2936 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 13:34:30.303895 kubelet[2936]: I1216 13:34:30.303897 2936 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 13:34:30.303965 kubelet[2936]: I1216 13:34:30.303910 2936 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 13:34:30.303965 kubelet[2936]: I1216 13:34:30.303914 2936 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 13:34:30.303965 kubelet[2936]: E1216 13:34:30.303934 2936 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:34:30.336945 kubelet[2936]: I1216 13:34:30.336928 2936 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:34:30.337069 kubelet[2936]: I1216 13:34:30.337040 2936 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:34:30.337113 kubelet[2936]: I1216 13:34:30.337108 2936 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:34:30.337218 kubelet[2936]: I1216 13:34:30.337207 2936 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:34:30.337283 kubelet[2936]: I1216 13:34:30.337271 2936 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:34:30.337320 kubelet[2936]: I1216 13:34:30.337315 2936 policy_none.go:49] "None policy: Start" Dec 16 13:34:30.337353 kubelet[2936]: I1216 13:34:30.337349 2936 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:34:30.337384 kubelet[2936]: I1216 13:34:30.337380 2936 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:34:30.337473 kubelet[2936]: I1216 13:34:30.337467 2936 state_mem.go:75] "Updated machine memory state" Dec 16 13:34:30.341242 kubelet[2936]: E1216 13:34:30.340600 2936 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:34:30.341242 kubelet[2936]: I1216 13:34:30.340687 2936 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:34:30.341242 kubelet[2936]: I1216 13:34:30.340695 2936 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:34:30.341242 kubelet[2936]: I1216 13:34:30.341101 2936 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:34:30.342869 kubelet[2936]: E1216 13:34:30.342857 2936 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:34:30.405213 kubelet[2936]: I1216 13:34:30.405171 2936 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 13:34:30.405554 kubelet[2936]: I1216 13:34:30.405537 2936 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 13:34:30.406865 kubelet[2936]: I1216 13:34:30.405657 2936 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 13:34:30.410980 kubelet[2936]: E1216 13:34:30.410958 2936 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 16 13:34:30.448092 kubelet[2936]: I1216 13:34:30.448045 2936 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 13:34:30.452644 kubelet[2936]: I1216 13:34:30.452319 2936 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 16 13:34:30.452783 kubelet[2936]: I1216 13:34:30.452772 2936 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 13:34:30.494062 kubelet[2936]: I1216 13:34:30.493982 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d1c9c50755eee58e1ebb02ac7e972ec-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0d1c9c50755eee58e1ebb02ac7e972ec\") " pod="kube-system/kube-apiserver-localhost" Dec 16 13:34:30.494062 kubelet[2936]: I1216 13:34:30.494005 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:34:30.494062 kubelet[2936]: I1216 13:34:30.494020 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d1c9c50755eee58e1ebb02ac7e972ec-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d1c9c50755eee58e1ebb02ac7e972ec\") " pod="kube-system/kube-apiserver-localhost" Dec 16 13:34:30.494062 kubelet[2936]: I1216 13:34:30.494033 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d1c9c50755eee58e1ebb02ac7e972ec-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d1c9c50755eee58e1ebb02ac7e972ec\") " pod="kube-system/kube-apiserver-localhost" Dec 16 13:34:30.494062 kubelet[2936]: I1216 13:34:30.494046 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:34:30.494364 kubelet[2936]: I1216 13:34:30.494283 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " 
pod="kube-system/kube-controller-manager-localhost" Dec 16 13:34:30.494364 kubelet[2936]: I1216 13:34:30.494308 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:34:30.494364 kubelet[2936]: I1216 13:34:30.494321 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:34:30.494364 kubelet[2936]: I1216 13:34:30.494337 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 16 13:34:31.273937 kubelet[2936]: I1216 13:34:31.273820 2936 apiserver.go:52] "Watching apiserver" Dec 16 13:34:31.291921 kubelet[2936]: I1216 13:34:31.291882 2936 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:34:31.328237 kubelet[2936]: I1216 13:34:31.328192 2936 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 13:34:31.328783 kubelet[2936]: I1216 13:34:31.328707 2936 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 13:34:31.331608 kubelet[2936]: E1216 13:34:31.331596 2936 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 16 13:34:31.334469 kubelet[2936]: E1216 13:34:31.334457 2936 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 13:34:31.351672 kubelet[2936]: I1216 13:34:31.351566 2936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.3515572750000002 podStartE2EDuration="2.351557275s" podCreationTimestamp="2025-12-16 13:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:34:31.351414717 +0000 UTC m=+1.129912818" watchObservedRunningTime="2025-12-16 13:34:31.351557275 +0000 UTC m=+1.130055370" Dec 16 13:34:31.351672 kubelet[2936]: I1216 13:34:31.351625 2936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.351622172 podStartE2EDuration="1.351622172s" podCreationTimestamp="2025-12-16 13:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:34:31.346145578 +0000 UTC m=+1.124643678" watchObservedRunningTime="2025-12-16 13:34:31.351622172 +0000 UTC m=+1.130120266" Dec 16 13:34:31.356960 kubelet[2936]: I1216 13:34:31.356934 2936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.356913343 podStartE2EDuration="1.356913343s" podCreationTimestamp="2025-12-16 13:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:34:31.356366664 +0000 UTC m=+1.134864765" watchObservedRunningTime="2025-12-16 13:34:31.356913343 +0000 UTC m=+1.135411436" Dec 16 13:34:36.052863 kubelet[2936]: I1216 13:34:36.052762 2936 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:34:36.053141 containerd[1607]: time="2025-12-16T13:34:36.052996675Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 13:34:36.053381 kubelet[2936]: I1216 13:34:36.053118 2936 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:34:37.039790 systemd[1]: Created slice kubepods-besteffort-pod20609c45_5e79_4317_aff6_0b1efc6a2fd0.slice - libcontainer container kubepods-besteffort-pod20609c45_5e79_4317_aff6_0b1efc6a2fd0.slice. Dec 16 13:34:37.137033 kubelet[2936]: I1216 13:34:37.137002 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20609c45-5e79-4317-aff6-0b1efc6a2fd0-lib-modules\") pod \"kube-proxy-lk89v\" (UID: \"20609c45-5e79-4317-aff6-0b1efc6a2fd0\") " pod="kube-system/kube-proxy-lk89v" Dec 16 13:34:37.137033 kubelet[2936]: I1216 13:34:37.137032 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgvgm\" (UniqueName: \"kubernetes.io/projected/20609c45-5e79-4317-aff6-0b1efc6a2fd0-kube-api-access-lgvgm\") pod \"kube-proxy-lk89v\" (UID: \"20609c45-5e79-4317-aff6-0b1efc6a2fd0\") " pod="kube-system/kube-proxy-lk89v" Dec 16 13:34:37.137362 kubelet[2936]: I1216 13:34:37.137050 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/20609c45-5e79-4317-aff6-0b1efc6a2fd0-kube-proxy\") pod \"kube-proxy-lk89v\" (UID: \"20609c45-5e79-4317-aff6-0b1efc6a2fd0\") " pod="kube-system/kube-proxy-lk89v" Dec 16 13:34:37.137362 kubelet[2936]: I1216 13:34:37.137062 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20609c45-5e79-4317-aff6-0b1efc6a2fd0-xtables-lock\") pod \"kube-proxy-lk89v\" (UID: \"20609c45-5e79-4317-aff6-0b1efc6a2fd0\") " pod="kube-system/kube-proxy-lk89v" Dec 16 13:34:37.256987 kubelet[2936]: I1216 13:34:37.256339 2936 status_manager.go:895] "Failed to get status for pod" podUID="9252e29e-932b-49d6-8312-6600aee865fd" pod="tigera-operator/tigera-operator-7dcd859c48-4zjln" err="pods \"tigera-operator-7dcd859c48-4zjln\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" Dec 16 13:34:37.260730 systemd[1]: Created slice kubepods-besteffort-pod9252e29e_932b_49d6_8312_6600aee865fd.slice - libcontainer container kubepods-besteffort-pod9252e29e_932b_49d6_8312_6600aee865fd.slice. 
Dec 16 13:34:37.337766 kubelet[2936]: I1216 13:34:37.337664 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8bg5\" (UniqueName: \"kubernetes.io/projected/9252e29e-932b-49d6-8312-6600aee865fd-kube-api-access-t8bg5\") pod \"tigera-operator-7dcd859c48-4zjln\" (UID: \"9252e29e-932b-49d6-8312-6600aee865fd\") " pod="tigera-operator/tigera-operator-7dcd859c48-4zjln" Dec 16 13:34:37.337766 kubelet[2936]: I1216 13:34:37.337692 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9252e29e-932b-49d6-8312-6600aee865fd-var-lib-calico\") pod \"tigera-operator-7dcd859c48-4zjln\" (UID: \"9252e29e-932b-49d6-8312-6600aee865fd\") " pod="tigera-operator/tigera-operator-7dcd859c48-4zjln" Dec 16 13:34:37.355463 containerd[1607]: time="2025-12-16T13:34:37.355436142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lk89v,Uid:20609c45-5e79-4317-aff6-0b1efc6a2fd0,Namespace:kube-system,Attempt:0,}" Dec 16 13:34:37.368834 containerd[1607]: time="2025-12-16T13:34:37.368777572Z" level=info msg="connecting to shim da9c7763967bb689e35edb7f330e816c997ef242d2eaf1a375095d357823d127" address="unix:///run/containerd/s/9f48a5a0cd795f9d47ff4b7c30622bdae3bfd0dbc628ca071f9408f34edf0288" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:34:37.391412 systemd[1]: Started cri-containerd-da9c7763967bb689e35edb7f330e816c997ef242d2eaf1a375095d357823d127.scope - libcontainer container da9c7763967bb689e35edb7f330e816c997ef242d2eaf1a375095d357823d127. Dec 16 13:34:37.406604 containerd[1607]: time="2025-12-16T13:34:37.406580411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lk89v,Uid:20609c45-5e79-4317-aff6-0b1efc6a2fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"da9c7763967bb689e35edb7f330e816c997ef242d2eaf1a375095d357823d127\"" Dec 16 13:34:37.410631 containerd[1607]: time="2025-12-16T13:34:37.410589964Z" level=info msg="CreateContainer within sandbox \"da9c7763967bb689e35edb7f330e816c997ef242d2eaf1a375095d357823d127\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:34:37.416571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2534504159.mount: Deactivated successfully. 
Dec 16 13:34:37.417484 containerd[1607]: time="2025-12-16T13:34:37.417048717Z" level=info msg="Container 73dbbd96b29de356347ecf5dd8e183974f1f8a49b41838e4dd47c78b88903030: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:34:37.420104 containerd[1607]: time="2025-12-16T13:34:37.420086277Z" level=info msg="CreateContainer within sandbox \"da9c7763967bb689e35edb7f330e816c997ef242d2eaf1a375095d357823d127\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"73dbbd96b29de356347ecf5dd8e183974f1f8a49b41838e4dd47c78b88903030\"" Dec 16 13:34:37.420494 containerd[1607]: time="2025-12-16T13:34:37.420480832Z" level=info msg="StartContainer for \"73dbbd96b29de356347ecf5dd8e183974f1f8a49b41838e4dd47c78b88903030\"" Dec 16 13:34:37.421630 containerd[1607]: time="2025-12-16T13:34:37.421613589Z" level=info msg="connecting to shim 73dbbd96b29de356347ecf5dd8e183974f1f8a49b41838e4dd47c78b88903030" address="unix:///run/containerd/s/9f48a5a0cd795f9d47ff4b7c30622bdae3bfd0dbc628ca071f9408f34edf0288" protocol=ttrpc version=3 Dec 16 13:34:37.433309 systemd[1]: Started cri-containerd-73dbbd96b29de356347ecf5dd8e183974f1f8a49b41838e4dd47c78b88903030.scope - libcontainer container 73dbbd96b29de356347ecf5dd8e183974f1f8a49b41838e4dd47c78b88903030. Dec 16 13:34:37.481420 containerd[1607]: time="2025-12-16T13:34:37.481400939Z" level=info msg="StartContainer for \"73dbbd96b29de356347ecf5dd8e183974f1f8a49b41838e4dd47c78b88903030\" returns successfully" Dec 16 13:34:37.563110 containerd[1607]: time="2025-12-16T13:34:37.563053509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-4zjln,Uid:9252e29e-932b-49d6-8312-6600aee865fd,Namespace:tigera-operator,Attempt:0,}" Dec 16 13:34:37.576169 containerd[1607]: time="2025-12-16T13:34:37.576107898Z" level=info msg="connecting to shim d7944383ec8f66d41770539d35f1e231e6a8c8365f2d1639dba5c36ae3e03075" address="unix:///run/containerd/s/f736f45ef8579f6f60916653e978510f7cb50d26304477e450685a5fd5e0dc4c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:34:37.597310 systemd[1]: Started cri-containerd-d7944383ec8f66d41770539d35f1e231e6a8c8365f2d1639dba5c36ae3e03075.scope - libcontainer container d7944383ec8f66d41770539d35f1e231e6a8c8365f2d1639dba5c36ae3e03075. Dec 16 13:34:37.630649 containerd[1607]: time="2025-12-16T13:34:37.630585648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-4zjln,Uid:9252e29e-932b-49d6-8312-6600aee865fd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d7944383ec8f66d41770539d35f1e231e6a8c8365f2d1639dba5c36ae3e03075\"" Dec 16 13:34:37.632434 containerd[1607]: time="2025-12-16T13:34:37.632412370Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 13:34:38.344945 kubelet[2936]: I1216 13:34:38.344743 2936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lk89v" podStartSLOduration=1.344731441 podStartE2EDuration="1.344731441s" podCreationTimestamp="2025-12-16 13:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:34:38.3443079 +0000 UTC m=+8.122806010" watchObservedRunningTime="2025-12-16 13:34:38.344731441 +0000 UTC m=+8.123229543" Dec 16 13:34:39.131644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount971459462.mount: Deactivated successfully. 
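Note: in the "Observed pod startup duration" entry for kube-proxy above, firstStartedPulling and lastFinishedPulling are the zero time, so the reported podStartSLOduration is essentially observedRunningTime minus podCreationTimestamp. A rough outside reconstruction of that arithmetic using the logged timestamps (a sketch, not kubelet code; the small residual difference comes from the tracker using its own clock readings):

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-12-16 13:34:37 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-12-16 13:34:38.3443079 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// With no image pull in play, the whole creation-to-running window counts as startup latency.
	fmt.Println("approx podStartSLOduration:", running.Sub(created)) // ~1.344s, close to the logged 1.344731441s
}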
Dec 16 13:34:39.491077 containerd[1607]: time="2025-12-16T13:34:39.491053606Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:39.491862 containerd[1607]: time="2025-12-16T13:34:39.491848886Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Dec 16 13:34:39.492176 containerd[1607]: time="2025-12-16T13:34:39.492161488Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:39.493188 containerd[1607]: time="2025-12-16T13:34:39.493176303Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:39.493549 containerd[1607]: time="2025-12-16T13:34:39.493420552Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.860975721s" Dec 16 13:34:39.493549 containerd[1607]: time="2025-12-16T13:34:39.493503139Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 13:34:39.495519 containerd[1607]: time="2025-12-16T13:34:39.495456946Z" level=info msg="CreateContainer within sandbox \"d7944383ec8f66d41770539d35f1e231e6a8c8365f2d1639dba5c36ae3e03075\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 13:34:39.501238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2966512498.mount: Deactivated successfully. Dec 16 13:34:39.502336 containerd[1607]: time="2025-12-16T13:34:39.502312896Z" level=info msg="Container 9e09aaa83f99aafe420d6cf7088b6499949aa9d08a3e0d384bdce7a95713143e: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:34:39.514400 containerd[1607]: time="2025-12-16T13:34:39.514376502Z" level=info msg="CreateContainer within sandbox \"d7944383ec8f66d41770539d35f1e231e6a8c8365f2d1639dba5c36ae3e03075\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9e09aaa83f99aafe420d6cf7088b6499949aa9d08a3e0d384bdce7a95713143e\"" Dec 16 13:34:39.515581 containerd[1607]: time="2025-12-16T13:34:39.514739978Z" level=info msg="StartContainer for \"9e09aaa83f99aafe420d6cf7088b6499949aa9d08a3e0d384bdce7a95713143e\"" Dec 16 13:34:39.516248 containerd[1607]: time="2025-12-16T13:34:39.516176850Z" level=info msg="connecting to shim 9e09aaa83f99aafe420d6cf7088b6499949aa9d08a3e0d384bdce7a95713143e" address="unix:///run/containerd/s/f736f45ef8579f6f60916653e978510f7cb50d26304477e450685a5fd5e0dc4c" protocol=ttrpc version=3 Dec 16 13:34:39.538366 systemd[1]: Started cri-containerd-9e09aaa83f99aafe420d6cf7088b6499949aa9d08a3e0d384bdce7a95713143e.scope - libcontainer container 9e09aaa83f99aafe420d6cf7088b6499949aa9d08a3e0d384bdce7a95713143e. 
Dec 16 13:34:39.565113 containerd[1607]: time="2025-12-16T13:34:39.565088855Z" level=info msg="StartContainer for \"9e09aaa83f99aafe420d6cf7088b6499949aa9d08a3e0d384bdce7a95713143e\" returns successfully" Dec 16 13:34:40.351489 kubelet[2936]: I1216 13:34:40.351453 2936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-4zjln" podStartSLOduration=1.48869135 podStartE2EDuration="3.351437288s" podCreationTimestamp="2025-12-16 13:34:37 +0000 UTC" firstStartedPulling="2025-12-16 13:34:37.631301297 +0000 UTC m=+7.409799386" lastFinishedPulling="2025-12-16 13:34:39.494047232 +0000 UTC m=+9.272545324" observedRunningTime="2025-12-16 13:34:40.348775813 +0000 UTC m=+10.127273916" watchObservedRunningTime="2025-12-16 13:34:40.351437288 +0000 UTC m=+10.129935390" Dec 16 13:34:44.419222 sudo[1941]: pam_unix(sudo:session): session closed for user root Dec 16 13:34:44.420867 sshd[1940]: Connection closed by 139.178.89.65 port 60834 Dec 16 13:34:44.421220 sshd-session[1937]: pam_unix(sshd:session): session closed for user core Dec 16 13:34:44.425536 systemd[1]: sshd@6-139.178.70.100:22-139.178.89.65:60834.service: Deactivated successfully. Dec 16 13:34:44.426367 systemd-logind[1583]: Session 9 logged out. Waiting for processes to exit. Dec 16 13:34:44.427727 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 13:34:44.428499 systemd[1]: session-9.scope: Consumed 3.083s CPU time, 151.7M memory peak. Dec 16 13:34:44.431438 systemd-logind[1583]: Removed session 9. Dec 16 13:34:48.510879 systemd[1]: Created slice kubepods-besteffort-pod6f45ec84_56b3_4381_8f50_dff6c0240acd.slice - libcontainer container kubepods-besteffort-pod6f45ec84_56b3_4381_8f50_dff6c0240acd.slice. Dec 16 13:34:48.611503 kubelet[2936]: I1216 13:34:48.611410 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f45ec84-56b3-4381-8f50-dff6c0240acd-tigera-ca-bundle\") pod \"calico-typha-5b86945f84-lgzvm\" (UID: \"6f45ec84-56b3-4381-8f50-dff6c0240acd\") " pod="calico-system/calico-typha-5b86945f84-lgzvm" Dec 16 13:34:48.611503 kubelet[2936]: I1216 13:34:48.611445 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6f45ec84-56b3-4381-8f50-dff6c0240acd-typha-certs\") pod \"calico-typha-5b86945f84-lgzvm\" (UID: \"6f45ec84-56b3-4381-8f50-dff6c0240acd\") " pod="calico-system/calico-typha-5b86945f84-lgzvm" Dec 16 13:34:48.611503 kubelet[2936]: I1216 13:34:48.611459 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4889z\" (UniqueName: \"kubernetes.io/projected/6f45ec84-56b3-4381-8f50-dff6c0240acd-kube-api-access-4889z\") pod \"calico-typha-5b86945f84-lgzvm\" (UID: \"6f45ec84-56b3-4381-8f50-dff6c0240acd\") " pod="calico-system/calico-typha-5b86945f84-lgzvm" Dec 16 13:34:48.716620 systemd[1]: Created slice kubepods-besteffort-pod6a725d7f_a956_4a53_b654_c7a59697d300.slice - libcontainer container kubepods-besteffort-pod6a725d7f_a956_4a53_b654_c7a59697d300.slice. 
Dec 16 13:34:48.812123 kubelet[2936]: I1216 13:34:48.811737 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6a725d7f-a956-4a53-b654-c7a59697d300-cni-net-dir\") pod \"calico-node-j2ck4\" (UID: \"6a725d7f-a956-4a53-b654-c7a59697d300\") " pod="calico-system/calico-node-j2ck4" Dec 16 13:34:48.812123 kubelet[2936]: I1216 13:34:48.811760 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6a725d7f-a956-4a53-b654-c7a59697d300-node-certs\") pod \"calico-node-j2ck4\" (UID: \"6a725d7f-a956-4a53-b654-c7a59697d300\") " pod="calico-system/calico-node-j2ck4" Dec 16 13:34:48.812123 kubelet[2936]: I1216 13:34:48.811771 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6a725d7f-a956-4a53-b654-c7a59697d300-policysync\") pod \"calico-node-j2ck4\" (UID: \"6a725d7f-a956-4a53-b654-c7a59697d300\") " pod="calico-system/calico-node-j2ck4" Dec 16 13:34:48.812123 kubelet[2936]: I1216 13:34:48.811782 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6a725d7f-a956-4a53-b654-c7a59697d300-flexvol-driver-host\") pod \"calico-node-j2ck4\" (UID: \"6a725d7f-a956-4a53-b654-c7a59697d300\") " pod="calico-system/calico-node-j2ck4" Dec 16 13:34:48.812123 kubelet[2936]: I1216 13:34:48.812010 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a725d7f-a956-4a53-b654-c7a59697d300-tigera-ca-bundle\") pod \"calico-node-j2ck4\" (UID: \"6a725d7f-a956-4a53-b654-c7a59697d300\") " pod="calico-system/calico-node-j2ck4" Dec 16 13:34:48.812289 kubelet[2936]: I1216 13:34:48.812022 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6a725d7f-a956-4a53-b654-c7a59697d300-cni-bin-dir\") pod \"calico-node-j2ck4\" (UID: \"6a725d7f-a956-4a53-b654-c7a59697d300\") " pod="calico-system/calico-node-j2ck4" Dec 16 13:34:48.812289 kubelet[2936]: I1216 13:34:48.812031 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6a725d7f-a956-4a53-b654-c7a59697d300-cni-log-dir\") pod \"calico-node-j2ck4\" (UID: \"6a725d7f-a956-4a53-b654-c7a59697d300\") " pod="calico-system/calico-node-j2ck4" Dec 16 13:34:48.812289 kubelet[2936]: I1216 13:34:48.812041 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6a725d7f-a956-4a53-b654-c7a59697d300-var-lib-calico\") pod \"calico-node-j2ck4\" (UID: \"6a725d7f-a956-4a53-b654-c7a59697d300\") " pod="calico-system/calico-node-j2ck4" Dec 16 13:34:48.812289 kubelet[2936]: I1216 13:34:48.812049 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a725d7f-a956-4a53-b654-c7a59697d300-xtables-lock\") pod \"calico-node-j2ck4\" (UID: \"6a725d7f-a956-4a53-b654-c7a59697d300\") " pod="calico-system/calico-node-j2ck4" Dec 16 13:34:48.812289 kubelet[2936]: I1216 13:34:48.812059 2936 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rlj7\" (UniqueName: \"kubernetes.io/projected/6a725d7f-a956-4a53-b654-c7a59697d300-kube-api-access-8rlj7\") pod \"calico-node-j2ck4\" (UID: \"6a725d7f-a956-4a53-b654-c7a59697d300\") " pod="calico-system/calico-node-j2ck4" Dec 16 13:34:48.812371 kubelet[2936]: I1216 13:34:48.812069 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a725d7f-a956-4a53-b654-c7a59697d300-lib-modules\") pod \"calico-node-j2ck4\" (UID: \"6a725d7f-a956-4a53-b654-c7a59697d300\") " pod="calico-system/calico-node-j2ck4" Dec 16 13:34:48.812371 kubelet[2936]: I1216 13:34:48.812077 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6a725d7f-a956-4a53-b654-c7a59697d300-var-run-calico\") pod \"calico-node-j2ck4\" (UID: \"6a725d7f-a956-4a53-b654-c7a59697d300\") " pod="calico-system/calico-node-j2ck4" Dec 16 13:34:48.815239 containerd[1607]: time="2025-12-16T13:34:48.815211171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b86945f84-lgzvm,Uid:6f45ec84-56b3-4381-8f50-dff6c0240acd,Namespace:calico-system,Attempt:0,}" Dec 16 13:34:48.855451 containerd[1607]: time="2025-12-16T13:34:48.855416504Z" level=info msg="connecting to shim 12e60d7e503ac7f7a5154a5385bb7b02e965ba6b97190968bbc1c4b4efb62f37" address="unix:///run/containerd/s/dd277212cd183bbad7f753757ae948b253416f9afbe433825bb35253118bb507" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:34:48.892582 systemd[1]: Started cri-containerd-12e60d7e503ac7f7a5154a5385bb7b02e965ba6b97190968bbc1c4b4efb62f37.scope - libcontainer container 12e60d7e503ac7f7a5154a5385bb7b02e965ba6b97190968bbc1c4b4efb62f37. 
Dec 16 13:34:48.896242 kubelet[2936]: E1216 13:34:48.895982 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vq4wz" podUID="6f1d1d03-3f5c-4209-adf6-a2cec01d4b01" Dec 16 13:34:48.912964 kubelet[2936]: I1216 13:34:48.912932 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f1d1d03-3f5c-4209-adf6-a2cec01d4b01-kubelet-dir\") pod \"csi-node-driver-vq4wz\" (UID: \"6f1d1d03-3f5c-4209-adf6-a2cec01d4b01\") " pod="calico-system/csi-node-driver-vq4wz" Dec 16 13:34:48.913284 kubelet[2936]: I1216 13:34:48.913268 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6f1d1d03-3f5c-4209-adf6-a2cec01d4b01-socket-dir\") pod \"csi-node-driver-vq4wz\" (UID: \"6f1d1d03-3f5c-4209-adf6-a2cec01d4b01\") " pod="calico-system/csi-node-driver-vq4wz" Dec 16 13:34:48.913320 kubelet[2936]: I1216 13:34:48.913286 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6f1d1d03-3f5c-4209-adf6-a2cec01d4b01-varrun\") pod \"csi-node-driver-vq4wz\" (UID: \"6f1d1d03-3f5c-4209-adf6-a2cec01d4b01\") " pod="calico-system/csi-node-driver-vq4wz" Dec 16 13:34:48.913320 kubelet[2936]: I1216 13:34:48.913296 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hllc7\" (UniqueName: \"kubernetes.io/projected/6f1d1d03-3f5c-4209-adf6-a2cec01d4b01-kube-api-access-hllc7\") pod \"csi-node-driver-vq4wz\" (UID: \"6f1d1d03-3f5c-4209-adf6-a2cec01d4b01\") " pod="calico-system/csi-node-driver-vq4wz" Dec 16 13:34:48.913320 kubelet[2936]: I1216 13:34:48.913311 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6f1d1d03-3f5c-4209-adf6-a2cec01d4b01-registration-dir\") pod \"csi-node-driver-vq4wz\" (UID: \"6f1d1d03-3f5c-4209-adf6-a2cec01d4b01\") " pod="calico-system/csi-node-driver-vq4wz" Dec 16 13:34:48.919925 kubelet[2936]: E1216 13:34:48.919892 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:48.919925 kubelet[2936]: W1216 13:34:48.919917 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:48.922560 kubelet[2936]: E1216 13:34:48.922540 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:34:48.923865 kubelet[2936]: E1216 13:34:48.923853 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:48.923865 kubelet[2936]: W1216 13:34:48.923862 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:48.923952 kubelet[2936]: E1216 13:34:48.923873 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:48.949311 containerd[1607]: time="2025-12-16T13:34:48.949273189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b86945f84-lgzvm,Uid:6f45ec84-56b3-4381-8f50-dff6c0240acd,Namespace:calico-system,Attempt:0,} returns sandbox id \"12e60d7e503ac7f7a5154a5385bb7b02e965ba6b97190968bbc1c4b4efb62f37\"" Dec 16 13:34:48.965433 containerd[1607]: time="2025-12-16T13:34:48.965414855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 13:34:49.014640 kubelet[2936]: E1216 13:34:49.014615 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.014640 kubelet[2936]: W1216 13:34:49.014632 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.014789 kubelet[2936]: E1216 13:34:49.014657 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:49.014789 kubelet[2936]: E1216 13:34:49.014781 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.014789 kubelet[2936]: W1216 13:34:49.014787 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.014884 kubelet[2936]: E1216 13:34:49.014793 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:49.014928 kubelet[2936]: E1216 13:34:49.014917 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.014928 kubelet[2936]: W1216 13:34:49.014922 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.014928 kubelet[2936]: E1216 13:34:49.014928 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:34:49.015018 kubelet[2936]: E1216 13:34:49.015012 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.015018 kubelet[2936]: W1216 13:34:49.015016 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.015098 kubelet[2936]: E1216 13:34:49.015021 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:49.015131 kubelet[2936]: E1216 13:34:49.015102 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.015131 kubelet[2936]: W1216 13:34:49.015106 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.015131 kubelet[2936]: E1216 13:34:49.015110 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:49.015263 kubelet[2936]: E1216 13:34:49.015255 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.015303 kubelet[2936]: W1216 13:34:49.015265 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.015303 kubelet[2936]: E1216 13:34:49.015272 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:49.015383 kubelet[2936]: E1216 13:34:49.015371 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.015383 kubelet[2936]: W1216 13:34:49.015377 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.015445 kubelet[2936]: E1216 13:34:49.015382 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:49.015499 kubelet[2936]: E1216 13:34:49.015487 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.015499 kubelet[2936]: W1216 13:34:49.015495 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.015545 kubelet[2936]: E1216 13:34:49.015500 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:34:49.015639 kubelet[2936]: E1216 13:34:49.015628 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.015639 kubelet[2936]: W1216 13:34:49.015636 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.015688 kubelet[2936]: E1216 13:34:49.015642 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:49.015769 kubelet[2936]: E1216 13:34:49.015755 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.015769 kubelet[2936]: W1216 13:34:49.015763 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.015769 kubelet[2936]: E1216 13:34:49.015767 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:49.015879 kubelet[2936]: E1216 13:34:49.015837 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.015879 kubelet[2936]: W1216 13:34:49.015841 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.015879 kubelet[2936]: E1216 13:34:49.015845 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:49.017619 kubelet[2936]: E1216 13:34:49.017606 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.017619 kubelet[2936]: W1216 13:34:49.017614 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.017687 kubelet[2936]: E1216 13:34:49.017623 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:49.017826 kubelet[2936]: E1216 13:34:49.017745 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.017826 kubelet[2936]: W1216 13:34:49.017757 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.017826 kubelet[2936]: E1216 13:34:49.017762 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:34:49.017968 kubelet[2936]: E1216 13:34:49.017882 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.017968 kubelet[2936]: W1216 13:34:49.017887 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.017968 kubelet[2936]: E1216 13:34:49.017896 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:49.017968 kubelet[2936]: E1216 13:34:49.017997 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.017968 kubelet[2936]: W1216 13:34:49.018002 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.017968 kubelet[2936]: E1216 13:34:49.018008 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:49.018443 kubelet[2936]: E1216 13:34:49.018098 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.018443 kubelet[2936]: W1216 13:34:49.018102 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.018443 kubelet[2936]: E1216 13:34:49.018107 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:49.018443 kubelet[2936]: E1216 13:34:49.018221 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.018443 kubelet[2936]: W1216 13:34:49.018262 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.018443 kubelet[2936]: E1216 13:34:49.018268 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:49.018443 kubelet[2936]: E1216 13:34:49.018360 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.018443 kubelet[2936]: W1216 13:34:49.018366 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.018443 kubelet[2936]: E1216 13:34:49.018370 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:34:49.018778 kubelet[2936]: E1216 13:34:49.018514 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.018778 kubelet[2936]: W1216 13:34:49.018518 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.018778 kubelet[2936]: E1216 13:34:49.018523 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:49.018778 kubelet[2936]: E1216 13:34:49.018679 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.018778 kubelet[2936]: W1216 13:34:49.018684 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.018778 kubelet[2936]: E1216 13:34:49.018689 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:49.018873 kubelet[2936]: E1216 13:34:49.018816 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.018873 kubelet[2936]: W1216 13:34:49.018820 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.018873 kubelet[2936]: E1216 13:34:49.018825 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:49.019201 kubelet[2936]: E1216 13:34:49.018955 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.019201 kubelet[2936]: W1216 13:34:49.018959 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.019201 kubelet[2936]: E1216 13:34:49.018964 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:49.019201 kubelet[2936]: E1216 13:34:49.019052 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.019201 kubelet[2936]: W1216 13:34:49.019057 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.019201 kubelet[2936]: E1216 13:34:49.019061 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:34:49.019201 kubelet[2936]: E1216 13:34:49.019142 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.019201 kubelet[2936]: W1216 13:34:49.019146 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.019201 kubelet[2936]: E1216 13:34:49.019151 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:49.019434 kubelet[2936]: E1216 13:34:49.019276 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.019434 kubelet[2936]: W1216 13:34:49.019280 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.019434 kubelet[2936]: E1216 13:34:49.019284 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:49.022423 containerd[1607]: time="2025-12-16T13:34:49.021960608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j2ck4,Uid:6a725d7f-a956-4a53-b654-c7a59697d300,Namespace:calico-system,Attempt:0,}" Dec 16 13:34:49.029171 kubelet[2936]: E1216 13:34:49.028960 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:49.029171 kubelet[2936]: W1216 13:34:49.028974 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:49.029171 kubelet[2936]: E1216 13:34:49.028986 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:49.036915 containerd[1607]: time="2025-12-16T13:34:49.036887874Z" level=info msg="connecting to shim 8784275f1890a3ffcd6787f762f5c39e5939de092900c9a92aa1515b1dc6d5b2" address="unix:///run/containerd/s/2dddc14c55f244dd4e700eb95640063e23f0b06f3e1952a295664f476e450b2f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:34:49.058430 systemd[1]: Started cri-containerd-8784275f1890a3ffcd6787f762f5c39e5939de092900c9a92aa1515b1dc6d5b2.scope - libcontainer container 8784275f1890a3ffcd6787f762f5c39e5939de092900c9a92aa1515b1dc6d5b2. Dec 16 13:34:49.084797 containerd[1607]: time="2025-12-16T13:34:49.084217497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j2ck4,Uid:6a725d7f-a956-4a53-b654-c7a59697d300,Namespace:calico-system,Attempt:0,} returns sandbox id \"8784275f1890a3ffcd6787f762f5c39e5939de092900c9a92aa1515b1dc6d5b2\"" Dec 16 13:34:50.485724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2654341486.mount: Deactivated successfully. 
Dec 16 13:34:51.304941 kubelet[2936]: E1216 13:34:51.304885 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vq4wz" podUID="6f1d1d03-3f5c-4209-adf6-a2cec01d4b01" Dec 16 13:34:51.693567 containerd[1607]: time="2025-12-16T13:34:51.693495765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:51.698235 containerd[1607]: time="2025-12-16T13:34:51.698209901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Dec 16 13:34:51.702982 containerd[1607]: time="2025-12-16T13:34:51.702951359Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:51.707798 containerd[1607]: time="2025-12-16T13:34:51.707769515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:51.708141 containerd[1607]: time="2025-12-16T13:34:51.708119940Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.742580714s" Dec 16 13:34:51.708327 containerd[1607]: time="2025-12-16T13:34:51.708141230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 13:34:51.709257 containerd[1607]: time="2025-12-16T13:34:51.709238397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 13:34:51.726639 containerd[1607]: time="2025-12-16T13:34:51.726554209Z" level=info msg="CreateContainer within sandbox \"12e60d7e503ac7f7a5154a5385bb7b02e965ba6b97190968bbc1c4b4efb62f37\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 13:34:51.731859 containerd[1607]: time="2025-12-16T13:34:51.731842896Z" level=info msg="Container 42aff65fe8810a63e54f605334955218a24400239e50fde033913d2a132e235b: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:34:51.734786 containerd[1607]: time="2025-12-16T13:34:51.734733851Z" level=info msg="CreateContainer within sandbox \"12e60d7e503ac7f7a5154a5385bb7b02e965ba6b97190968bbc1c4b4efb62f37\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"42aff65fe8810a63e54f605334955218a24400239e50fde033913d2a132e235b\"" Dec 16 13:34:51.735148 containerd[1607]: time="2025-12-16T13:34:51.735131217Z" level=info msg="StartContainer for \"42aff65fe8810a63e54f605334955218a24400239e50fde033913d2a132e235b\"" Dec 16 13:34:51.736516 containerd[1607]: time="2025-12-16T13:34:51.736499171Z" level=info msg="connecting to shim 42aff65fe8810a63e54f605334955218a24400239e50fde033913d2a132e235b" address="unix:///run/containerd/s/dd277212cd183bbad7f753757ae948b253416f9afbe433825bb35253118bb507" protocol=ttrpc version=3 Dec 16 13:34:51.764316 systemd[1]: Started 
cri-containerd-42aff65fe8810a63e54f605334955218a24400239e50fde033913d2a132e235b.scope - libcontainer container 42aff65fe8810a63e54f605334955218a24400239e50fde033913d2a132e235b. Dec 16 13:34:51.804743 containerd[1607]: time="2025-12-16T13:34:51.804716147Z" level=info msg="StartContainer for \"42aff65fe8810a63e54f605334955218a24400239e50fde033913d2a132e235b\" returns successfully" Dec 16 13:34:52.377521 kubelet[2936]: I1216 13:34:52.377340 2936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b86945f84-lgzvm" podStartSLOduration=1.618393485 podStartE2EDuration="4.377329506s" podCreationTimestamp="2025-12-16 13:34:48 +0000 UTC" firstStartedPulling="2025-12-16 13:34:48.95000773 +0000 UTC m=+18.728505820" lastFinishedPulling="2025-12-16 13:34:51.708943747 +0000 UTC m=+21.487441841" observedRunningTime="2025-12-16 13:34:52.375634433 +0000 UTC m=+22.154132525" watchObservedRunningTime="2025-12-16 13:34:52.377329506 +0000 UTC m=+22.155827607" Dec 16 13:34:52.429548 kubelet[2936]: E1216 13:34:52.429520 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.429548 kubelet[2936]: W1216 13:34:52.429544 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.430290 kubelet[2936]: E1216 13:34:52.430267 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.430461 kubelet[2936]: E1216 13:34:52.430444 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.430461 kubelet[2936]: W1216 13:34:52.430458 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.430553 kubelet[2936]: E1216 13:34:52.430468 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.430596 kubelet[2936]: E1216 13:34:52.430567 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.430596 kubelet[2936]: W1216 13:34:52.430573 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.430596 kubelet[2936]: E1216 13:34:52.430579 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:34:52.430731 kubelet[2936]: E1216 13:34:52.430715 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.430731 kubelet[2936]: W1216 13:34:52.430726 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.430804 kubelet[2936]: E1216 13:34:52.430732 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.430857 kubelet[2936]: E1216 13:34:52.430847 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.430857 kubelet[2936]: W1216 13:34:52.430855 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.430930 kubelet[2936]: E1216 13:34:52.430866 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.430987 kubelet[2936]: E1216 13:34:52.430966 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.430987 kubelet[2936]: W1216 13:34:52.430977 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.430987 kubelet[2936]: E1216 13:34:52.430984 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.431114 kubelet[2936]: E1216 13:34:52.431070 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.431114 kubelet[2936]: W1216 13:34:52.431076 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.431114 kubelet[2936]: E1216 13:34:52.431082 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.431275 kubelet[2936]: E1216 13:34:52.431179 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.431275 kubelet[2936]: W1216 13:34:52.431186 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.431275 kubelet[2936]: E1216 13:34:52.431192 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:34:52.431383 kubelet[2936]: E1216 13:34:52.431315 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.431383 kubelet[2936]: W1216 13:34:52.431322 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.431383 kubelet[2936]: E1216 13:34:52.431328 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.431502 kubelet[2936]: E1216 13:34:52.431429 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.431502 kubelet[2936]: W1216 13:34:52.431438 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.431502 kubelet[2936]: E1216 13:34:52.431447 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.431603 kubelet[2936]: E1216 13:34:52.431546 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.431603 kubelet[2936]: W1216 13:34:52.431553 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.431603 kubelet[2936]: E1216 13:34:52.431560 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.431704 kubelet[2936]: E1216 13:34:52.431667 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.431704 kubelet[2936]: W1216 13:34:52.431673 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.431704 kubelet[2936]: E1216 13:34:52.431679 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.431829 kubelet[2936]: E1216 13:34:52.431778 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.431829 kubelet[2936]: W1216 13:34:52.431785 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.431829 kubelet[2936]: E1216 13:34:52.431791 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:34:52.431922 kubelet[2936]: E1216 13:34:52.431882 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.431922 kubelet[2936]: W1216 13:34:52.431890 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.431922 kubelet[2936]: E1216 13:34:52.431897 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.432026 kubelet[2936]: E1216 13:34:52.431982 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.432026 kubelet[2936]: W1216 13:34:52.431988 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.432026 kubelet[2936]: E1216 13:34:52.431994 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.443413 kubelet[2936]: E1216 13:34:52.443343 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.443413 kubelet[2936]: W1216 13:34:52.443357 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.443413 kubelet[2936]: E1216 13:34:52.443370 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.444407 kubelet[2936]: E1216 13:34:52.444363 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.444407 kubelet[2936]: W1216 13:34:52.444371 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.444407 kubelet[2936]: E1216 13:34:52.444379 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.444524 kubelet[2936]: E1216 13:34:52.444508 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.444524 kubelet[2936]: W1216 13:34:52.444520 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.444638 kubelet[2936]: E1216 13:34:52.444528 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:34:52.444638 kubelet[2936]: E1216 13:34:52.444624 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.444638 kubelet[2936]: W1216 13:34:52.444630 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.444638 kubelet[2936]: E1216 13:34:52.444638 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.444774 kubelet[2936]: E1216 13:34:52.444727 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.444774 kubelet[2936]: W1216 13:34:52.444733 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.444774 kubelet[2936]: E1216 13:34:52.444739 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.444890 kubelet[2936]: E1216 13:34:52.444853 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.444890 kubelet[2936]: W1216 13:34:52.444859 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.444890 kubelet[2936]: E1216 13:34:52.444865 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.445125 kubelet[2936]: E1216 13:34:52.445080 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.445125 kubelet[2936]: W1216 13:34:52.445089 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.445125 kubelet[2936]: E1216 13:34:52.445096 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.445212 kubelet[2936]: E1216 13:34:52.445204 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.445212 kubelet[2936]: W1216 13:34:52.445212 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.445296 kubelet[2936]: E1216 13:34:52.445218 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:34:52.445360 kubelet[2936]: E1216 13:34:52.445327 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.445360 kubelet[2936]: W1216 13:34:52.445333 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.445360 kubelet[2936]: E1216 13:34:52.445338 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.445469 kubelet[2936]: E1216 13:34:52.445427 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.445469 kubelet[2936]: W1216 13:34:52.445433 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.445469 kubelet[2936]: E1216 13:34:52.445438 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.445589 kubelet[2936]: E1216 13:34:52.445543 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.445589 kubelet[2936]: W1216 13:34:52.445549 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.445589 kubelet[2936]: E1216 13:34:52.445555 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.445820 kubelet[2936]: E1216 13:34:52.445795 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.445820 kubelet[2936]: W1216 13:34:52.445803 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.445820 kubelet[2936]: E1216 13:34:52.445811 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.446122 kubelet[2936]: E1216 13:34:52.446046 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.446122 kubelet[2936]: W1216 13:34:52.446054 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.446122 kubelet[2936]: E1216 13:34:52.446062 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:34:52.446301 kubelet[2936]: E1216 13:34:52.446256 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.446301 kubelet[2936]: W1216 13:34:52.446264 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.446301 kubelet[2936]: E1216 13:34:52.446271 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.446658 kubelet[2936]: E1216 13:34:52.446596 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.446658 kubelet[2936]: W1216 13:34:52.446605 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.446658 kubelet[2936]: E1216 13:34:52.446612 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.446955 kubelet[2936]: E1216 13:34:52.446841 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.446955 kubelet[2936]: W1216 13:34:52.446849 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.446955 kubelet[2936]: E1216 13:34:52.446856 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.447078 kubelet[2936]: E1216 13:34:52.447066 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.447078 kubelet[2936]: W1216 13:34:52.447076 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.447143 kubelet[2936]: E1216 13:34:52.447084 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:34:52.447241 kubelet[2936]: E1216 13:34:52.447191 2936 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:34:52.447241 kubelet[2936]: W1216 13:34:52.447199 2936 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:34:52.447241 kubelet[2936]: E1216 13:34:52.447205 2936 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:34:53.070161 containerd[1607]: time="2025-12-16T13:34:53.069677161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:53.071504 containerd[1607]: time="2025-12-16T13:34:53.071432193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Dec 16 13:34:53.071728 containerd[1607]: time="2025-12-16T13:34:53.071600055Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:53.072554 containerd[1607]: time="2025-12-16T13:34:53.072538383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:53.080410 containerd[1607]: time="2025-12-16T13:34:53.080347842Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.3710903s" Dec 16 13:34:53.080410 containerd[1607]: time="2025-12-16T13:34:53.080364598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 13:34:53.083260 containerd[1607]: time="2025-12-16T13:34:53.082852814Z" level=info msg="CreateContainer within sandbox \"8784275f1890a3ffcd6787f762f5c39e5939de092900c9a92aa1515b1dc6d5b2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 13:34:53.089921 containerd[1607]: time="2025-12-16T13:34:53.089901548Z" level=info msg="Container 5eab1d85792fcaff2cf364aed799068767abebeff854e72b5f542cccc81b2e2b: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:34:53.093473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3099440986.mount: Deactivated successfully. Dec 16 13:34:53.094764 containerd[1607]: time="2025-12-16T13:34:53.094746933Z" level=info msg="CreateContainer within sandbox \"8784275f1890a3ffcd6787f762f5c39e5939de092900c9a92aa1515b1dc6d5b2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5eab1d85792fcaff2cf364aed799068767abebeff854e72b5f542cccc81b2e2b\"" Dec 16 13:34:53.095428 containerd[1607]: time="2025-12-16T13:34:53.095397919Z" level=info msg="StartContainer for \"5eab1d85792fcaff2cf364aed799068767abebeff854e72b5f542cccc81b2e2b\"" Dec 16 13:34:53.097136 containerd[1607]: time="2025-12-16T13:34:53.097120365Z" level=info msg="connecting to shim 5eab1d85792fcaff2cf364aed799068767abebeff854e72b5f542cccc81b2e2b" address="unix:///run/containerd/s/2dddc14c55f244dd4e700eb95640063e23f0b06f3e1952a295664f476e450b2f" protocol=ttrpc version=3 Dec 16 13:34:53.119324 systemd[1]: Started cri-containerd-5eab1d85792fcaff2cf364aed799068767abebeff854e72b5f542cccc81b2e2b.scope - libcontainer container 5eab1d85792fcaff2cf364aed799068767abebeff854e72b5f542cccc81b2e2b. 
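The repeated "Failed to unmarshal output for command: init" / "unexpected end of JSON input" entries above come from the kubelet's FlexVolume prober: it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and parses its stdout as JSON, but the binary is not on disk yet (it is installed by the flexvol-driver init container started here), so stdout is empty and the unmarshal fails. Below is a minimal sketch of the reply a FlexVolume driver is expected to print for init, assuming the standard FlexVolume calling convention; only the binary path and error strings are taken from the log, the rest is illustrative.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON the kubelet tries to unmarshal from the
// driver's stdout; an empty stdout is what yields "unexpected end of JSON input".
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		fmt.Println(string(out))
		return
	}
	// Operations the driver does not implement are reported as "Not supported".
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
}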
Dec 16 13:34:53.188520 systemd[1]: cri-containerd-5eab1d85792fcaff2cf364aed799068767abebeff854e72b5f542cccc81b2e2b.scope: Deactivated successfully. Dec 16 13:34:53.230516 containerd[1607]: time="2025-12-16T13:34:53.230455913Z" level=info msg="StartContainer for \"5eab1d85792fcaff2cf364aed799068767abebeff854e72b5f542cccc81b2e2b\" returns successfully" Dec 16 13:34:53.256530 containerd[1607]: time="2025-12-16T13:34:53.256501350Z" level=info msg="received container exit event container_id:\"5eab1d85792fcaff2cf364aed799068767abebeff854e72b5f542cccc81b2e2b\" id:\"5eab1d85792fcaff2cf364aed799068767abebeff854e72b5f542cccc81b2e2b\" pid:3575 exited_at:{seconds:1765892093 nanos:191198925}" Dec 16 13:34:53.286083 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5eab1d85792fcaff2cf364aed799068767abebeff854e72b5f542cccc81b2e2b-rootfs.mount: Deactivated successfully. Dec 16 13:34:53.320142 kubelet[2936]: E1216 13:34:53.319263 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vq4wz" podUID="6f1d1d03-3f5c-4209-adf6-a2cec01d4b01" Dec 16 13:34:54.376546 containerd[1607]: time="2025-12-16T13:34:54.376516550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 13:34:55.305245 kubelet[2936]: E1216 13:34:55.305019 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vq4wz" podUID="6f1d1d03-3f5c-4209-adf6-a2cec01d4b01" Dec 16 13:34:57.304671 kubelet[2936]: E1216 13:34:57.304254 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vq4wz" podUID="6f1d1d03-3f5c-4209-adf6-a2cec01d4b01" Dec 16 13:34:58.110248 containerd[1607]: time="2025-12-16T13:34:58.110028085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:58.110594 containerd[1607]: time="2025-12-16T13:34:58.110578980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Dec 16 13:34:58.110924 containerd[1607]: time="2025-12-16T13:34:58.110909487Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:58.112319 containerd[1607]: time="2025-12-16T13:34:58.112304268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:34:58.115112 containerd[1607]: time="2025-12-16T13:34:58.112693498Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.736157327s" 
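containerd reports the flexvol-driver container's exit as a protobuf-style timestamp, exited_at:{seconds:1765892093 nanos:191198925}. A small sketch, using those two values verbatim, converts it into the RFC 3339 form used by the surrounding entries and confirms it lines up with the 13:34:53 events around it.

package main

import (
	"fmt"
	"time"
)

func main() {
	// seconds/nanos copied from the "received container exit event" entry above.
	exitedAt := time.Unix(1765892093, 191198925).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-12-16T13:34:53.191198925Z
}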
Dec 16 13:34:58.115112 containerd[1607]: time="2025-12-16T13:34:58.112706928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 13:34:58.115112 containerd[1607]: time="2025-12-16T13:34:58.114957728Z" level=info msg="CreateContainer within sandbox \"8784275f1890a3ffcd6787f762f5c39e5939de092900c9a92aa1515b1dc6d5b2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 13:34:58.120459 containerd[1607]: time="2025-12-16T13:34:58.120388944Z" level=info msg="Container 2dd2d5d8e9bbc4fadf0c6f7cc1cb81cbf6a92cd18b0438a904c90e909178e937: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:34:58.122629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1147006684.mount: Deactivated successfully. Dec 16 13:34:58.125316 containerd[1607]: time="2025-12-16T13:34:58.125299913Z" level=info msg="CreateContainer within sandbox \"8784275f1890a3ffcd6787f762f5c39e5939de092900c9a92aa1515b1dc6d5b2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2dd2d5d8e9bbc4fadf0c6f7cc1cb81cbf6a92cd18b0438a904c90e909178e937\"" Dec 16 13:34:58.126007 containerd[1607]: time="2025-12-16T13:34:58.125650118Z" level=info msg="StartContainer for \"2dd2d5d8e9bbc4fadf0c6f7cc1cb81cbf6a92cd18b0438a904c90e909178e937\"" Dec 16 13:34:58.140926 containerd[1607]: time="2025-12-16T13:34:58.140902922Z" level=info msg="connecting to shim 2dd2d5d8e9bbc4fadf0c6f7cc1cb81cbf6a92cd18b0438a904c90e909178e937" address="unix:///run/containerd/s/2dddc14c55f244dd4e700eb95640063e23f0b06f3e1952a295664f476e450b2f" protocol=ttrpc version=3 Dec 16 13:34:58.163314 systemd[1]: Started cri-containerd-2dd2d5d8e9bbc4fadf0c6f7cc1cb81cbf6a92cd18b0438a904c90e909178e937.scope - libcontainer container 2dd2d5d8e9bbc4fadf0c6f7cc1cb81cbf6a92cd18b0438a904c90e909178e937. Dec 16 13:34:58.228420 containerd[1607]: time="2025-12-16T13:34:58.228399431Z" level=info msg="StartContainer for \"2dd2d5d8e9bbc4fadf0c6f7cc1cb81cbf6a92cd18b0438a904c90e909178e937\" returns successfully" Dec 16 13:34:59.323618 kubelet[2936]: E1216 13:34:59.323473 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vq4wz" podUID="6f1d1d03-3f5c-4209-adf6-a2cec01d4b01" Dec 16 13:34:59.340036 systemd[1]: cri-containerd-2dd2d5d8e9bbc4fadf0c6f7cc1cb81cbf6a92cd18b0438a904c90e909178e937.scope: Deactivated successfully. Dec 16 13:34:59.340506 systemd[1]: cri-containerd-2dd2d5d8e9bbc4fadf0c6f7cc1cb81cbf6a92cd18b0438a904c90e909178e937.scope: Consumed 289ms CPU time, 168.2M memory peak, 2.3M read from disk, 171.3M written to disk. Dec 16 13:34:59.352158 containerd[1607]: time="2025-12-16T13:34:59.352134128Z" level=info msg="received container exit event container_id:\"2dd2d5d8e9bbc4fadf0c6f7cc1cb81cbf6a92cd18b0438a904c90e909178e937\" id:\"2dd2d5d8e9bbc4fadf0c6f7cc1cb81cbf6a92cd18b0438a904c90e909178e937\" pid:3636 exited_at:{seconds:1765892099 nanos:341600014}" Dec 16 13:34:59.380341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2dd2d5d8e9bbc4fadf0c6f7cc1cb81cbf6a92cd18b0438a904c90e909178e937-rootfs.mount: Deactivated successfully. 
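Every RunPodSandbox failure that follows carries the same underlying error: the Calico CNI plugin cannot find /var/lib/calico/nodename, a file the calico/node container is expected to provide once it runs (per the error text itself; its image is still being pulled below). The following is a rough sketch of the readiness check implied by that message, reconstructed from this log rather than from the Calico sources.

package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename"

// nodeName reproduces the failure mode seen in the sandbox errors below:
// until calico/node runs and mounts /var/lib/calico/, the stat fails.
func nodeName() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return string(data), nil
}

func main() {
	name, err := nodeName()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}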
Dec 16 13:34:59.392882 kubelet[2936]: I1216 13:34:59.392862 2936 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 13:34:59.434884 systemd[1]: Created slice kubepods-burstable-poddfb9abff_2398_4522_b3cb_8954570c6d45.slice - libcontainer container kubepods-burstable-poddfb9abff_2398_4522_b3cb_8954570c6d45.slice. Dec 16 13:34:59.446756 systemd[1]: Created slice kubepods-besteffort-podd4fd9b6b_9d45_4c37_9fcc_05498ebe1ddf.slice - libcontainer container kubepods-besteffort-podd4fd9b6b_9d45_4c37_9fcc_05498ebe1ddf.slice. Dec 16 13:34:59.452732 systemd[1]: Created slice kubepods-besteffort-pod7900a761_f455_401e_a05b_5ba11ccd5975.slice - libcontainer container kubepods-besteffort-pod7900a761_f455_401e_a05b_5ba11ccd5975.slice. Dec 16 13:34:59.458223 systemd[1]: Created slice kubepods-besteffort-pod1c2fd014_23b6_46cf_a696_49f2992f6d8d.slice - libcontainer container kubepods-besteffort-pod1c2fd014_23b6_46cf_a696_49f2992f6d8d.slice. Dec 16 13:34:59.463166 systemd[1]: Created slice kubepods-besteffort-podec4d992b_26cc_4e71_9db7_9af961649e2b.slice - libcontainer container kubepods-besteffort-podec4d992b_26cc_4e71_9db7_9af961649e2b.slice. Dec 16 13:34:59.467560 systemd[1]: Created slice kubepods-burstable-podddb278be_f5ee_492e_b7ea_fb105804e93d.slice - libcontainer container kubepods-burstable-podddb278be_f5ee_492e_b7ea_fb105804e93d.slice. Dec 16 13:34:59.471631 systemd[1]: Created slice kubepods-besteffort-podf5d82f91_8d34_4dd6_9053_b327d15a7af5.slice - libcontainer container kubepods-besteffort-podf5d82f91_8d34_4dd6_9053_b327d15a7af5.slice. Dec 16 13:34:59.476900 containerd[1607]: time="2025-12-16T13:34:59.476870427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 13:34:59.490428 kubelet[2936]: I1216 13:34:59.490403 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n88np\" (UniqueName: \"kubernetes.io/projected/7900a761-f455-401e-a05b-5ba11ccd5975-kube-api-access-n88np\") pod \"calico-kube-controllers-78f95cbfdd-lh872\" (UID: \"7900a761-f455-401e-a05b-5ba11ccd5975\") " pod="calico-system/calico-kube-controllers-78f95cbfdd-lh872" Dec 16 13:34:59.490428 kubelet[2936]: I1216 13:34:59.490432 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5d82f91-8d34-4dd6-9053-b327d15a7af5-config\") pod \"goldmane-666569f655-5bgh6\" (UID: \"f5d82f91-8d34-4dd6-9053-b327d15a7af5\") " pod="calico-system/goldmane-666569f655-5bgh6" Dec 16 13:34:59.490547 kubelet[2936]: I1216 13:34:59.490444 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f5d82f91-8d34-4dd6-9053-b327d15a7af5-goldmane-key-pair\") pod \"goldmane-666569f655-5bgh6\" (UID: \"f5d82f91-8d34-4dd6-9053-b327d15a7af5\") " pod="calico-system/goldmane-666569f655-5bgh6" Dec 16 13:34:59.490547 kubelet[2936]: I1216 13:34:59.490453 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fxr6\" (UniqueName: \"kubernetes.io/projected/dfb9abff-2398-4522-b3cb-8954570c6d45-kube-api-access-6fxr6\") pod \"coredns-674b8bbfcf-h54lr\" (UID: \"dfb9abff-2398-4522-b3cb-8954570c6d45\") " pod="kube-system/coredns-674b8bbfcf-h54lr" Dec 16 13:34:59.490547 kubelet[2936]: I1216 13:34:59.490476 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf-calico-apiserver-certs\") pod \"calico-apiserver-59f94fcf66-9fr5g\" (UID: \"d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf\") " pod="calico-apiserver/calico-apiserver-59f94fcf66-9fr5g" Dec 16 13:34:59.490547 kubelet[2936]: I1216 13:34:59.490485 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7900a761-f455-401e-a05b-5ba11ccd5975-tigera-ca-bundle\") pod \"calico-kube-controllers-78f95cbfdd-lh872\" (UID: \"7900a761-f455-401e-a05b-5ba11ccd5975\") " pod="calico-system/calico-kube-controllers-78f95cbfdd-lh872" Dec 16 13:34:59.490547 kubelet[2936]: I1216 13:34:59.490500 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nctwd\" (UniqueName: \"kubernetes.io/projected/ec4d992b-26cc-4e71-9db7-9af961649e2b-kube-api-access-nctwd\") pod \"calico-apiserver-59f94fcf66-7qr5n\" (UID: \"ec4d992b-26cc-4e71-9db7-9af961649e2b\") " pod="calico-apiserver/calico-apiserver-59f94fcf66-7qr5n" Dec 16 13:34:59.490637 kubelet[2936]: I1216 13:34:59.490511 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ddb278be-f5ee-492e-b7ea-fb105804e93d-config-volume\") pod \"coredns-674b8bbfcf-zr5tc\" (UID: \"ddb278be-f5ee-492e-b7ea-fb105804e93d\") " pod="kube-system/coredns-674b8bbfcf-zr5tc" Dec 16 13:34:59.490637 kubelet[2936]: I1216 13:34:59.490520 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzgsj\" (UniqueName: \"kubernetes.io/projected/f5d82f91-8d34-4dd6-9053-b327d15a7af5-kube-api-access-nzgsj\") pod \"goldmane-666569f655-5bgh6\" (UID: \"f5d82f91-8d34-4dd6-9053-b327d15a7af5\") " pod="calico-system/goldmane-666569f655-5bgh6" Dec 16 13:34:59.490637 kubelet[2936]: I1216 13:34:59.490530 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c2fd014-23b6-46cf-a696-49f2992f6d8d-whisker-ca-bundle\") pod \"whisker-7dd785ffcd-6zmlm\" (UID: \"1c2fd014-23b6-46cf-a696-49f2992f6d8d\") " pod="calico-system/whisker-7dd785ffcd-6zmlm" Dec 16 13:34:59.490637 kubelet[2936]: I1216 13:34:59.490543 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7bhf\" (UniqueName: \"kubernetes.io/projected/d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf-kube-api-access-l7bhf\") pod \"calico-apiserver-59f94fcf66-9fr5g\" (UID: \"d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf\") " pod="calico-apiserver/calico-apiserver-59f94fcf66-9fr5g" Dec 16 13:34:59.490637 kubelet[2936]: I1216 13:34:59.490551 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5d82f91-8d34-4dd6-9053-b327d15a7af5-goldmane-ca-bundle\") pod \"goldmane-666569f655-5bgh6\" (UID: \"f5d82f91-8d34-4dd6-9053-b327d15a7af5\") " pod="calico-system/goldmane-666569f655-5bgh6" Dec 16 13:34:59.490946 kubelet[2936]: I1216 13:34:59.490561 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1c2fd014-23b6-46cf-a696-49f2992f6d8d-whisker-backend-key-pair\") pod \"whisker-7dd785ffcd-6zmlm\" (UID: 
\"1c2fd014-23b6-46cf-a696-49f2992f6d8d\") " pod="calico-system/whisker-7dd785ffcd-6zmlm" Dec 16 13:34:59.490946 kubelet[2936]: I1216 13:34:59.490570 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9497\" (UniqueName: \"kubernetes.io/projected/1c2fd014-23b6-46cf-a696-49f2992f6d8d-kube-api-access-r9497\") pod \"whisker-7dd785ffcd-6zmlm\" (UID: \"1c2fd014-23b6-46cf-a696-49f2992f6d8d\") " pod="calico-system/whisker-7dd785ffcd-6zmlm" Dec 16 13:34:59.490946 kubelet[2936]: I1216 13:34:59.490585 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ec4d992b-26cc-4e71-9db7-9af961649e2b-calico-apiserver-certs\") pod \"calico-apiserver-59f94fcf66-7qr5n\" (UID: \"ec4d992b-26cc-4e71-9db7-9af961649e2b\") " pod="calico-apiserver/calico-apiserver-59f94fcf66-7qr5n" Dec 16 13:34:59.490946 kubelet[2936]: I1216 13:34:59.490596 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lhtm\" (UniqueName: \"kubernetes.io/projected/ddb278be-f5ee-492e-b7ea-fb105804e93d-kube-api-access-2lhtm\") pod \"coredns-674b8bbfcf-zr5tc\" (UID: \"ddb278be-f5ee-492e-b7ea-fb105804e93d\") " pod="kube-system/coredns-674b8bbfcf-zr5tc" Dec 16 13:34:59.490946 kubelet[2936]: I1216 13:34:59.490604 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfb9abff-2398-4522-b3cb-8954570c6d45-config-volume\") pod \"coredns-674b8bbfcf-h54lr\" (UID: \"dfb9abff-2398-4522-b3cb-8954570c6d45\") " pod="kube-system/coredns-674b8bbfcf-h54lr" Dec 16 13:34:59.744005 containerd[1607]: time="2025-12-16T13:34:59.743983990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h54lr,Uid:dfb9abff-2398-4522-b3cb-8954570c6d45,Namespace:kube-system,Attempt:0,}" Dec 16 13:34:59.750371 containerd[1607]: time="2025-12-16T13:34:59.750349687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f94fcf66-9fr5g,Uid:d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:34:59.769379 containerd[1607]: time="2025-12-16T13:34:59.769344502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f94fcf66-7qr5n,Uid:ec4d992b-26cc-4e71-9db7-9af961649e2b,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:34:59.770143 containerd[1607]: time="2025-12-16T13:34:59.769689766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zr5tc,Uid:ddb278be-f5ee-492e-b7ea-fb105804e93d,Namespace:kube-system,Attempt:0,}" Dec 16 13:34:59.770261 containerd[1607]: time="2025-12-16T13:34:59.769712655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f95cbfdd-lh872,Uid:7900a761-f455-401e-a05b-5ba11ccd5975,Namespace:calico-system,Attempt:0,}" Dec 16 13:34:59.770360 containerd[1607]: time="2025-12-16T13:34:59.769738638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dd785ffcd-6zmlm,Uid:1c2fd014-23b6-46cf-a696-49f2992f6d8d,Namespace:calico-system,Attempt:0,}" Dec 16 13:34:59.800213 containerd[1607]: time="2025-12-16T13:34:59.800188129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5bgh6,Uid:f5d82f91-8d34-4dd6-9053-b327d15a7af5,Namespace:calico-system,Attempt:0,}" Dec 16 13:34:59.975808 containerd[1607]: time="2025-12-16T13:34:59.975778799Z" 
level=error msg="Failed to destroy network for sandbox \"ea56fada1c9f7345936e99f163af84c5f2de4d84939d2b0e0a96c2be206f0e4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:34:59.977371 containerd[1607]: time="2025-12-16T13:34:59.977353914Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f94fcf66-7qr5n,Uid:ec4d992b-26cc-4e71-9db7-9af961649e2b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea56fada1c9f7345936e99f163af84c5f2de4d84939d2b0e0a96c2be206f0e4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:34:59.977888 kubelet[2936]: E1216 13:34:59.977859 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea56fada1c9f7345936e99f163af84c5f2de4d84939d2b0e0a96c2be206f0e4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:34:59.977997 kubelet[2936]: E1216 13:34:59.977987 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea56fada1c9f7345936e99f163af84c5f2de4d84939d2b0e0a96c2be206f0e4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f94fcf66-7qr5n" Dec 16 13:34:59.978079 kubelet[2936]: E1216 13:34:59.978064 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea56fada1c9f7345936e99f163af84c5f2de4d84939d2b0e0a96c2be206f0e4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f94fcf66-7qr5n" Dec 16 13:34:59.979291 kubelet[2936]: E1216 13:34:59.979244 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59f94fcf66-7qr5n_calico-apiserver(ec4d992b-26cc-4e71-9db7-9af961649e2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59f94fcf66-7qr5n_calico-apiserver(ec4d992b-26cc-4e71-9db7-9af961649e2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea56fada1c9f7345936e99f163af84c5f2de4d84939d2b0e0a96c2be206f0e4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59f94fcf66-7qr5n" podUID="ec4d992b-26cc-4e71-9db7-9af961649e2b" Dec 16 13:34:59.987201 containerd[1607]: time="2025-12-16T13:34:59.987175894Z" level=error msg="Failed to destroy network for sandbox \"005aae1ea255143654f9818279b585ed2e5ad9bec6886ab4aacbc83655c5d9b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Dec 16 13:34:59.988575 containerd[1607]: time="2025-12-16T13:34:59.988411917Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f94fcf66-9fr5g,Uid:d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"005aae1ea255143654f9818279b585ed2e5ad9bec6886ab4aacbc83655c5d9b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:34:59.993939 kubelet[2936]: E1216 13:34:59.993869 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"005aae1ea255143654f9818279b585ed2e5ad9bec6886ab4aacbc83655c5d9b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:34:59.993939 kubelet[2936]: E1216 13:34:59.993914 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"005aae1ea255143654f9818279b585ed2e5ad9bec6886ab4aacbc83655c5d9b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f94fcf66-9fr5g" Dec 16 13:34:59.993939 kubelet[2936]: E1216 13:34:59.993929 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"005aae1ea255143654f9818279b585ed2e5ad9bec6886ab4aacbc83655c5d9b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f94fcf66-9fr5g" Dec 16 13:34:59.994110 kubelet[2936]: E1216 13:34:59.993965 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59f94fcf66-9fr5g_calico-apiserver(d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59f94fcf66-9fr5g_calico-apiserver(d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"005aae1ea255143654f9818279b585ed2e5ad9bec6886ab4aacbc83655c5d9b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59f94fcf66-9fr5g" podUID="d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf" Dec 16 13:35:00.008067 containerd[1607]: time="2025-12-16T13:35:00.008043522Z" level=error msg="Failed to destroy network for sandbox \"a73adbadc91ffe11f0545519b6a90abc0e555fac007843bcd96560dd548d33c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:35:00.009714 containerd[1607]: time="2025-12-16T13:35:00.009688217Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dd785ffcd-6zmlm,Uid:1c2fd014-23b6-46cf-a696-49f2992f6d8d,Namespace:calico-system,Attempt:0,} failed, 
error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a73adbadc91ffe11f0545519b6a90abc0e555fac007843bcd96560dd548d33c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:35:00.009965 kubelet[2936]: E1216 13:35:00.009932 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a73adbadc91ffe11f0545519b6a90abc0e555fac007843bcd96560dd548d33c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:35:00.010077 kubelet[2936]: E1216 13:35:00.010065 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a73adbadc91ffe11f0545519b6a90abc0e555fac007843bcd96560dd548d33c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7dd785ffcd-6zmlm" Dec 16 13:35:00.010144 kubelet[2936]: E1216 13:35:00.010130 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a73adbadc91ffe11f0545519b6a90abc0e555fac007843bcd96560dd548d33c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7dd785ffcd-6zmlm" Dec 16 13:35:00.010291 kubelet[2936]: E1216 13:35:00.010277 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7dd785ffcd-6zmlm_calico-system(1c2fd014-23b6-46cf-a696-49f2992f6d8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7dd785ffcd-6zmlm_calico-system(1c2fd014-23b6-46cf-a696-49f2992f6d8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a73adbadc91ffe11f0545519b6a90abc0e555fac007843bcd96560dd548d33c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7dd785ffcd-6zmlm" podUID="1c2fd014-23b6-46cf-a696-49f2992f6d8d" Dec 16 13:35:00.013262 containerd[1607]: time="2025-12-16T13:35:00.013141188Z" level=error msg="Failed to destroy network for sandbox \"22a815834f16563b1d2bb47d62fd757b343973c9f32318ced6663f4ab0f3a02e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:35:00.013771 containerd[1607]: time="2025-12-16T13:35:00.013754033Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zr5tc,Uid:ddb278be-f5ee-492e-b7ea-fb105804e93d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"22a815834f16563b1d2bb47d62fd757b343973c9f32318ced6663f4ab0f3a02e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:35:00.013953 
kubelet[2936]: E1216 13:35:00.013935 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22a815834f16563b1d2bb47d62fd757b343973c9f32318ced6663f4ab0f3a02e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:35:00.014015 kubelet[2936]: E1216 13:35:00.013963 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22a815834f16563b1d2bb47d62fd757b343973c9f32318ced6663f4ab0f3a02e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zr5tc" Dec 16 13:35:00.014015 kubelet[2936]: E1216 13:35:00.013976 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22a815834f16563b1d2bb47d62fd757b343973c9f32318ced6663f4ab0f3a02e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zr5tc" Dec 16 13:35:00.014340 kubelet[2936]: E1216 13:35:00.014259 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-zr5tc_kube-system(ddb278be-f5ee-492e-b7ea-fb105804e93d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zr5tc_kube-system(ddb278be-f5ee-492e-b7ea-fb105804e93d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22a815834f16563b1d2bb47d62fd757b343973c9f32318ced6663f4ab0f3a02e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zr5tc" podUID="ddb278be-f5ee-492e-b7ea-fb105804e93d" Dec 16 13:35:00.016546 containerd[1607]: time="2025-12-16T13:35:00.016522347Z" level=error msg="Failed to destroy network for sandbox \"2df3e40c85fed9516f0bcdae1f6bd4f614137194be7415ba37df044af684c03d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:35:00.017484 containerd[1607]: time="2025-12-16T13:35:00.017444579Z" level=error msg="Failed to destroy network for sandbox \"941f0bc4c2f7aa7ff468a2392cf14398923ed15055a35011aa6d2652ce140120\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:35:00.017769 containerd[1607]: time="2025-12-16T13:35:00.017741154Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5bgh6,Uid:f5d82f91-8d34-4dd6-9053-b327d15a7af5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2df3e40c85fed9516f0bcdae1f6bd4f614137194be7415ba37df044af684c03d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 16 13:35:00.018052 containerd[1607]: time="2025-12-16T13:35:00.018035211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f95cbfdd-lh872,Uid:7900a761-f455-401e-a05b-5ba11ccd5975,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"941f0bc4c2f7aa7ff468a2392cf14398923ed15055a35011aa6d2652ce140120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:35:00.018133 kubelet[2936]: E1216 13:35:00.018067 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2df3e40c85fed9516f0bcdae1f6bd4f614137194be7415ba37df044af684c03d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:35:00.018133 kubelet[2936]: E1216 13:35:00.018092 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2df3e40c85fed9516f0bcdae1f6bd4f614137194be7415ba37df044af684c03d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5bgh6" Dec 16 13:35:00.018133 kubelet[2936]: E1216 13:35:00.018116 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2df3e40c85fed9516f0bcdae1f6bd4f614137194be7415ba37df044af684c03d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5bgh6" Dec 16 13:35:00.018376 kubelet[2936]: E1216 13:35:00.018151 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-5bgh6_calico-system(f5d82f91-8d34-4dd6-9053-b327d15a7af5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-5bgh6_calico-system(f5d82f91-8d34-4dd6-9053-b327d15a7af5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2df3e40c85fed9516f0bcdae1f6bd4f614137194be7415ba37df044af684c03d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-5bgh6" podUID="f5d82f91-8d34-4dd6-9053-b327d15a7af5" Dec 16 13:35:00.018504 kubelet[2936]: E1216 13:35:00.018419 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"941f0bc4c2f7aa7ff468a2392cf14398923ed15055a35011aa6d2652ce140120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:35:00.018504 kubelet[2936]: E1216 13:35:00.018442 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"941f0bc4c2f7aa7ff468a2392cf14398923ed15055a35011aa6d2652ce140120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78f95cbfdd-lh872" Dec 16 13:35:00.018504 kubelet[2936]: E1216 13:35:00.018453 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"941f0bc4c2f7aa7ff468a2392cf14398923ed15055a35011aa6d2652ce140120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78f95cbfdd-lh872" Dec 16 13:35:00.018657 kubelet[2936]: E1216 13:35:00.018474 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78f95cbfdd-lh872_calico-system(7900a761-f455-401e-a05b-5ba11ccd5975)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78f95cbfdd-lh872_calico-system(7900a761-f455-401e-a05b-5ba11ccd5975)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"941f0bc4c2f7aa7ff468a2392cf14398923ed15055a35011aa6d2652ce140120\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78f95cbfdd-lh872" podUID="7900a761-f455-401e-a05b-5ba11ccd5975" Dec 16 13:35:00.022358 containerd[1607]: time="2025-12-16T13:35:00.022332999Z" level=error msg="Failed to destroy network for sandbox \"804e1a996ad27e0aeb867c658fd3a42e2d070d50e52d9c9b43fc77aee4c1ecb3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:35:00.022692 containerd[1607]: time="2025-12-16T13:35:00.022673406Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h54lr,Uid:dfb9abff-2398-4522-b3cb-8954570c6d45,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"804e1a996ad27e0aeb867c658fd3a42e2d070d50e52d9c9b43fc77aee4c1ecb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:35:00.022794 kubelet[2936]: E1216 13:35:00.022774 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"804e1a996ad27e0aeb867c658fd3a42e2d070d50e52d9c9b43fc77aee4c1ecb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:35:00.022827 kubelet[2936]: E1216 13:35:00.022808 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"804e1a996ad27e0aeb867c658fd3a42e2d070d50e52d9c9b43fc77aee4c1ecb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-h54lr" Dec 16 13:35:00.022827 kubelet[2936]: E1216 13:35:00.022822 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"804e1a996ad27e0aeb867c658fd3a42e2d070d50e52d9c9b43fc77aee4c1ecb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h54lr" Dec 16 13:35:00.022877 kubelet[2936]: E1216 13:35:00.022865 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-h54lr_kube-system(dfb9abff-2398-4522-b3cb-8954570c6d45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-h54lr_kube-system(dfb9abff-2398-4522-b3cb-8954570c6d45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"804e1a996ad27e0aeb867c658fd3a42e2d070d50e52d9c9b43fc77aee4c1ecb3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-h54lr" podUID="dfb9abff-2398-4522-b3cb-8954570c6d45" Dec 16 13:35:01.308920 systemd[1]: Created slice kubepods-besteffort-pod6f1d1d03_3f5c_4209_adf6_a2cec01d4b01.slice - libcontainer container kubepods-besteffort-pod6f1d1d03_3f5c_4209_adf6_a2cec01d4b01.slice. Dec 16 13:35:01.312835 containerd[1607]: time="2025-12-16T13:35:01.312810828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vq4wz,Uid:6f1d1d03-3f5c-4209-adf6-a2cec01d4b01,Namespace:calico-system,Attempt:0,}" Dec 16 13:35:01.357902 containerd[1607]: time="2025-12-16T13:35:01.357869423Z" level=error msg="Failed to destroy network for sandbox \"b9ed0bc5b2984db691fdc789e0c9d8495691e0ffb850bac382c95e14de3b446e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:35:01.359353 systemd[1]: run-netns-cni\x2d12e23026\x2d3289\x2db313\x2d4f57\x2df41f2d95171c.mount: Deactivated successfully. 
Dec 16 13:35:01.359822 containerd[1607]: time="2025-12-16T13:35:01.359729958Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vq4wz,Uid:6f1d1d03-3f5c-4209-adf6-a2cec01d4b01,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9ed0bc5b2984db691fdc789e0c9d8495691e0ffb850bac382c95e14de3b446e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:35:01.360395 kubelet[2936]: E1216 13:35:01.360015 2936 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9ed0bc5b2984db691fdc789e0c9d8495691e0ffb850bac382c95e14de3b446e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:35:01.360395 kubelet[2936]: E1216 13:35:01.360057 2936 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9ed0bc5b2984db691fdc789e0c9d8495691e0ffb850bac382c95e14de3b446e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vq4wz" Dec 16 13:35:01.360571 kubelet[2936]: E1216 13:35:01.360428 2936 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9ed0bc5b2984db691fdc789e0c9d8495691e0ffb850bac382c95e14de3b446e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vq4wz" Dec 16 13:35:01.360697 kubelet[2936]: E1216 13:35:01.360584 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vq4wz_calico-system(6f1d1d03-3f5c-4209-adf6-a2cec01d4b01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vq4wz_calico-system(6f1d1d03-3f5c-4209-adf6-a2cec01d4b01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9ed0bc5b2984db691fdc789e0c9d8495691e0ffb850bac382c95e14de3b446e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vq4wz" podUID="6f1d1d03-3f5c-4209-adf6-a2cec01d4b01" Dec 16 13:35:05.182486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1461309693.mount: Deactivated successfully. 
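Every sandbox failure in the entries above (goldmane-666569f655-5bgh6, calico-kube-controllers-78f95cbfdd-lh872, coredns-674b8bbfcf-h54lr and csi-node-driver-vq4wz) trips on the same check: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico/node only writes once it is running with /var/lib/calico mounted, exactly as the error text suggests. A minimal sketch of that readiness check, assuming nothing beyond the file path quoted in the log:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // The file the Calico CNI plugin stats before it will set up pod networking;
        // calico/node writes it once it is running with /var/lib/calico mounted.
        const nodenameFile = "/var/lib/calico/nodename"
        if _, err := os.Stat(nodenameFile); err != nil {
            fmt.Printf("%v: check that the calico/node container is running\n", err)
            return
        }
        name, _ := os.ReadFile(nodenameFile)
        fmt.Printf("calico node name: %s\n", name)
    }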
Dec 16 13:35:05.390778 containerd[1607]: time="2025-12-16T13:35:05.377004118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:35:05.397412 containerd[1607]: time="2025-12-16T13:35:05.397395187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Dec 16 13:35:05.400154 containerd[1607]: time="2025-12-16T13:35:05.399425848Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:35:05.402134 containerd[1607]: time="2025-12-16T13:35:05.402109600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:35:05.402540 containerd[1607]: time="2025-12-16T13:35:05.402369310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.925474766s" Dec 16 13:35:05.402540 containerd[1607]: time="2025-12-16T13:35:05.402395817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 13:35:05.421416 containerd[1607]: time="2025-12-16T13:35:05.421389381Z" level=info msg="CreateContainer within sandbox \"8784275f1890a3ffcd6787f762f5c39e5939de092900c9a92aa1515b1dc6d5b2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 13:35:05.448427 containerd[1607]: time="2025-12-16T13:35:05.448309222Z" level=info msg="Container 933ba0787e31840ad7b18dbbd53f62087f2c5117178561380086027bf83ab39b: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:35:05.449450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount76662050.mount: Deactivated successfully. Dec 16 13:35:05.482962 containerd[1607]: time="2025-12-16T13:35:05.482935177Z" level=info msg="CreateContainer within sandbox \"8784275f1890a3ffcd6787f762f5c39e5939de092900c9a92aa1515b1dc6d5b2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"933ba0787e31840ad7b18dbbd53f62087f2c5117178561380086027bf83ab39b\"" Dec 16 13:35:05.483624 containerd[1607]: time="2025-12-16T13:35:05.483386373Z" level=info msg="StartContainer for \"933ba0787e31840ad7b18dbbd53f62087f2c5117178561380086027bf83ab39b\"" Dec 16 13:35:05.487366 containerd[1607]: time="2025-12-16T13:35:05.487344663Z" level=info msg="connecting to shim 933ba0787e31840ad7b18dbbd53f62087f2c5117178561380086027bf83ab39b" address="unix:///run/containerd/s/2dddc14c55f244dd4e700eb95640063e23f0b06f3e1952a295664f476e450b2f" protocol=ttrpc version=3 Dec 16 13:35:05.561354 systemd[1]: Started cri-containerd-933ba0787e31840ad7b18dbbd53f62087f2c5117178561380086027bf83ab39b.scope - libcontainer container 933ba0787e31840ad7b18dbbd53f62087f2c5117178561380086027bf83ab39b. Dec 16 13:35:05.621627 containerd[1607]: time="2025-12-16T13:35:05.621602211Z" level=info msg="StartContainer for \"933ba0787e31840ad7b18dbbd53f62087f2c5117178561380086027bf83ab39b\" returns successfully" Dec 16 13:35:05.709316 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Dec 16 13:35:05.711450 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 16 13:35:06.131603 kubelet[2936]: I1216 13:35:06.131575 2936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c2fd014-23b6-46cf-a696-49f2992f6d8d-whisker-ca-bundle\") pod \"1c2fd014-23b6-46cf-a696-49f2992f6d8d\" (UID: \"1c2fd014-23b6-46cf-a696-49f2992f6d8d\") " Dec 16 13:35:06.131854 kubelet[2936]: I1216 13:35:06.131622 2936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9497\" (UniqueName: \"kubernetes.io/projected/1c2fd014-23b6-46cf-a696-49f2992f6d8d-kube-api-access-r9497\") pod \"1c2fd014-23b6-46cf-a696-49f2992f6d8d\" (UID: \"1c2fd014-23b6-46cf-a696-49f2992f6d8d\") " Dec 16 13:35:06.131854 kubelet[2936]: I1216 13:35:06.131639 2936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1c2fd014-23b6-46cf-a696-49f2992f6d8d-whisker-backend-key-pair\") pod \"1c2fd014-23b6-46cf-a696-49f2992f6d8d\" (UID: \"1c2fd014-23b6-46cf-a696-49f2992f6d8d\") " Dec 16 13:35:06.141533 kubelet[2936]: I1216 13:35:06.141370 2936 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c2fd014-23b6-46cf-a696-49f2992f6d8d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1c2fd014-23b6-46cf-a696-49f2992f6d8d" (UID: "1c2fd014-23b6-46cf-a696-49f2992f6d8d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:35:06.147353 kubelet[2936]: I1216 13:35:06.147337 2936 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c2fd014-23b6-46cf-a696-49f2992f6d8d-kube-api-access-r9497" (OuterVolumeSpecName: "kube-api-access-r9497") pod "1c2fd014-23b6-46cf-a696-49f2992f6d8d" (UID: "1c2fd014-23b6-46cf-a696-49f2992f6d8d"). InnerVolumeSpecName "kube-api-access-r9497". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:35:06.147484 kubelet[2936]: I1216 13:35:06.147466 2936 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c2fd014-23b6-46cf-a696-49f2992f6d8d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1c2fd014-23b6-46cf-a696-49f2992f6d8d" (UID: "1c2fd014-23b6-46cf-a696-49f2992f6d8d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 13:35:06.183933 systemd[1]: var-lib-kubelet-pods-1c2fd014\x2d23b6\x2d46cf\x2da696\x2d49f2992f6d8d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr9497.mount: Deactivated successfully. Dec 16 13:35:06.183987 systemd[1]: var-lib-kubelet-pods-1c2fd014\x2d23b6\x2d46cf\x2da696\x2d49f2992f6d8d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 16 13:35:06.232123 kubelet[2936]: I1216 13:35:06.232073 2936 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c2fd014-23b6-46cf-a696-49f2992f6d8d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 16 13:35:06.232123 kubelet[2936]: I1216 13:35:06.232099 2936 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r9497\" (UniqueName: \"kubernetes.io/projected/1c2fd014-23b6-46cf-a696-49f2992f6d8d-kube-api-access-r9497\") on node \"localhost\" DevicePath \"\"" Dec 16 13:35:06.232123 kubelet[2936]: I1216 13:35:06.232106 2936 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1c2fd014-23b6-46cf-a696-49f2992f6d8d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 16 13:35:06.308698 systemd[1]: Removed slice kubepods-besteffort-pod1c2fd014_23b6_46cf_a696_49f2992f6d8d.slice - libcontainer container kubepods-besteffort-pod1c2fd014_23b6_46cf_a696_49f2992f6d8d.slice. Dec 16 13:35:06.506382 kubelet[2936]: I1216 13:35:06.505188 2936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-j2ck4" podStartSLOduration=2.214602555 podStartE2EDuration="18.505176669s" podCreationTimestamp="2025-12-16 13:34:48 +0000 UTC" firstStartedPulling="2025-12-16 13:34:49.112294616 +0000 UTC m=+18.890792706" lastFinishedPulling="2025-12-16 13:35:05.402868731 +0000 UTC m=+35.181366820" observedRunningTime="2025-12-16 13:35:06.504699612 +0000 UTC m=+36.283197713" watchObservedRunningTime="2025-12-16 13:35:06.505176669 +0000 UTC m=+36.283674765" Dec 16 13:35:06.562589 systemd[1]: Created slice kubepods-besteffort-pod8d8ff3b5_7370_40af_945d_8fef79b8d3a6.slice - libcontainer container kubepods-besteffort-pod8d8ff3b5_7370_40af_945d_8fef79b8d3a6.slice. 
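The pod_startup_latency_tracker entry above for calico-node-j2ck4 is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (13:35:06.505176669 - 13:34:48 = 18.505176669 s), the image pull took lastFinishedPulling minus firstStartedPulling (13:35:05.402868731 - 13:34:49.112294616 = 16.290574115 s), and podStartSLOduration is roughly their difference, 18.505 - 16.291 = 2.215 s, consistent with the SLO figure excluding image-pull time.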
Dec 16 13:35:06.635400 kubelet[2936]: I1216 13:35:06.635365 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8d8ff3b5-7370-40af-945d-8fef79b8d3a6-whisker-backend-key-pair\") pod \"whisker-d87f9cb8-v4hv4\" (UID: \"8d8ff3b5-7370-40af-945d-8fef79b8d3a6\") " pod="calico-system/whisker-d87f9cb8-v4hv4" Dec 16 13:35:06.635482 kubelet[2936]: I1216 13:35:06.635413 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d8ff3b5-7370-40af-945d-8fef79b8d3a6-whisker-ca-bundle\") pod \"whisker-d87f9cb8-v4hv4\" (UID: \"8d8ff3b5-7370-40af-945d-8fef79b8d3a6\") " pod="calico-system/whisker-d87f9cb8-v4hv4" Dec 16 13:35:06.635482 kubelet[2936]: I1216 13:35:06.635425 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjfmc\" (UniqueName: \"kubernetes.io/projected/8d8ff3b5-7370-40af-945d-8fef79b8d3a6-kube-api-access-mjfmc\") pod \"whisker-d87f9cb8-v4hv4\" (UID: \"8d8ff3b5-7370-40af-945d-8fef79b8d3a6\") " pod="calico-system/whisker-d87f9cb8-v4hv4" Dec 16 13:35:06.865674 containerd[1607]: time="2025-12-16T13:35:06.865575729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d87f9cb8-v4hv4,Uid:8d8ff3b5-7370-40af-945d-8fef79b8d3a6,Namespace:calico-system,Attempt:0,}" Dec 16 13:35:07.540606 kubelet[2936]: I1216 13:35:07.540100 2936 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:35:07.552290 systemd-networkd[1506]: cali264be4bdf6d: Link UP Dec 16 13:35:07.553405 systemd-networkd[1506]: cali264be4bdf6d: Gained carrier Dec 16 13:35:07.565468 containerd[1607]: 2025-12-16 13:35:06.894 [INFO][3970] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:35:07.565468 containerd[1607]: 2025-12-16 13:35:06.932 [INFO][3970] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--d87f9cb8--v4hv4-eth0 whisker-d87f9cb8- calico-system 8d8ff3b5-7370-40af-945d-8fef79b8d3a6 882 0 2025-12-16 13:35:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:d87f9cb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-d87f9cb8-v4hv4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali264be4bdf6d [] [] }} ContainerID="58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067" Namespace="calico-system" Pod="whisker-d87f9cb8-v4hv4" WorkloadEndpoint="localhost-k8s-whisker--d87f9cb8--v4hv4-" Dec 16 13:35:07.565468 containerd[1607]: 2025-12-16 13:35:06.932 [INFO][3970] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067" Namespace="calico-system" Pod="whisker-d87f9cb8-v4hv4" WorkloadEndpoint="localhost-k8s-whisker--d87f9cb8--v4hv4-eth0" Dec 16 13:35:07.565468 containerd[1607]: 2025-12-16 13:35:07.462 [INFO][3981] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067" HandleID="k8s-pod-network.58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067" Workload="localhost-k8s-whisker--d87f9cb8--v4hv4-eth0" Dec 16 13:35:07.565869 containerd[1607]: 2025-12-16 13:35:07.469 [INFO][3981] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067" HandleID="k8s-pod-network.58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067" Workload="localhost-k8s-whisker--d87f9cb8--v4hv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103440), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-d87f9cb8-v4hv4", "timestamp":"2025-12-16 13:35:07.462710172 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:35:07.565869 containerd[1607]: 2025-12-16 13:35:07.469 [INFO][3981] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:35:07.565869 containerd[1607]: 2025-12-16 13:35:07.469 [INFO][3981] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:35:07.565869 containerd[1607]: 2025-12-16 13:35:07.470 [INFO][3981] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 13:35:07.565869 containerd[1607]: 2025-12-16 13:35:07.493 [INFO][3981] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067" host="localhost" Dec 16 13:35:07.565869 containerd[1607]: 2025-12-16 13:35:07.507 [INFO][3981] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 13:35:07.565869 containerd[1607]: 2025-12-16 13:35:07.514 [INFO][3981] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 13:35:07.565869 containerd[1607]: 2025-12-16 13:35:07.516 [INFO][3981] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 13:35:07.565869 containerd[1607]: 2025-12-16 13:35:07.519 [INFO][3981] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 13:35:07.565869 containerd[1607]: 2025-12-16 13:35:07.519 [INFO][3981] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067" host="localhost" Dec 16 13:35:07.566033 containerd[1607]: 2025-12-16 13:35:07.520 [INFO][3981] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067 Dec 16 13:35:07.566033 containerd[1607]: 2025-12-16 13:35:07.525 [INFO][3981] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067" host="localhost" Dec 16 13:35:07.566033 containerd[1607]: 2025-12-16 13:35:07.529 [INFO][3981] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067" host="localhost" Dec 16 13:35:07.566033 containerd[1607]: 2025-12-16 13:35:07.529 [INFO][3981] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067" host="localhost" Dec 16 13:35:07.566033 containerd[1607]: 2025-12-16 13:35:07.529 [INFO][3981] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:35:07.566033 containerd[1607]: 2025-12-16 13:35:07.529 [INFO][3981] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067" HandleID="k8s-pod-network.58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067" Workload="localhost-k8s-whisker--d87f9cb8--v4hv4-eth0" Dec 16 13:35:07.566123 containerd[1607]: 2025-12-16 13:35:07.533 [INFO][3970] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067" Namespace="calico-system" Pod="whisker-d87f9cb8-v4hv4" WorkloadEndpoint="localhost-k8s-whisker--d87f9cb8--v4hv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--d87f9cb8--v4hv4-eth0", GenerateName:"whisker-d87f9cb8-", Namespace:"calico-system", SelfLink:"", UID:"8d8ff3b5-7370-40af-945d-8fef79b8d3a6", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 35, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d87f9cb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-d87f9cb8-v4hv4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali264be4bdf6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:35:07.566123 containerd[1607]: 2025-12-16 13:35:07.534 [INFO][3970] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067" Namespace="calico-system" Pod="whisker-d87f9cb8-v4hv4" WorkloadEndpoint="localhost-k8s-whisker--d87f9cb8--v4hv4-eth0" Dec 16 13:35:07.566181 containerd[1607]: 2025-12-16 13:35:07.534 [INFO][3970] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali264be4bdf6d ContainerID="58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067" Namespace="calico-system" Pod="whisker-d87f9cb8-v4hv4" WorkloadEndpoint="localhost-k8s-whisker--d87f9cb8--v4hv4-eth0" Dec 16 13:35:07.566181 containerd[1607]: 2025-12-16 13:35:07.552 [INFO][3970] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067" Namespace="calico-system" Pod="whisker-d87f9cb8-v4hv4" WorkloadEndpoint="localhost-k8s-whisker--d87f9cb8--v4hv4-eth0" Dec 16 13:35:07.566216 containerd[1607]: 2025-12-16 13:35:07.552 [INFO][3970] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067" Namespace="calico-system" Pod="whisker-d87f9cb8-v4hv4" WorkloadEndpoint="localhost-k8s-whisker--d87f9cb8--v4hv4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--d87f9cb8--v4hv4-eth0", GenerateName:"whisker-d87f9cb8-", Namespace:"calico-system", SelfLink:"", UID:"8d8ff3b5-7370-40af-945d-8fef79b8d3a6", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 35, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d87f9cb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067", Pod:"whisker-d87f9cb8-v4hv4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali264be4bdf6d", MAC:"b2:18:9a:31:9f:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:35:07.566273 containerd[1607]: 2025-12-16 13:35:07.559 [INFO][3970] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067" Namespace="calico-system" Pod="whisker-d87f9cb8-v4hv4" WorkloadEndpoint="localhost-k8s-whisker--d87f9cb8--v4hv4-eth0" Dec 16 13:35:07.732369 systemd-networkd[1506]: vxlan.calico: Link UP Dec 16 13:35:07.732373 systemd-networkd[1506]: vxlan.calico: Gained carrier Dec 16 13:35:07.741333 containerd[1607]: time="2025-12-16T13:35:07.741308267Z" level=info msg="connecting to shim 58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067" address="unix:///run/containerd/s/ee85652e4e54101553007eee1c5fd29b87a7f88d9f151772e15c79a287b7c9c6" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:35:07.774318 systemd[1]: Started cri-containerd-58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067.scope - libcontainer container 58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067. 
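The IPAM exchange above claims the host-affine block 192.168.88.128/26 and hands its first workload address, 192.168.88.129, to whisker-d87f9cb8-v4hv4; the later allocations in this log take 192.168.88.130 (coredns-674b8bbfcf-zr5tc) and 192.168.88.131 (calico-apiserver-59f94fcf66-7qr5n) from the same block. A small sketch of the block arithmetic, using only the CIDR quoted in the log:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // 192.168.88.128/26 as logged by ipam/ipam.go; a /26 holds 64 addresses,
        // so this node can allocate pod IPs from .128 through .191.
        block := netip.MustParsePrefix("192.168.88.128/26")
        count := 0
        last := block.Addr()
        for ip := block.Addr(); block.Contains(ip); ip = ip.Next() {
            last = ip
            count++
        }
        fmt.Printf("block %s: %d addresses, %s .. %s\n", block, count, block.Addr(), last)
        // Prints: block 192.168.88.128/26: 64 addresses, 192.168.88.128 .. 192.168.88.191
    }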
Dec 16 13:35:07.800665 systemd-resolved[1507]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 13:35:07.882563 containerd[1607]: time="2025-12-16T13:35:07.882535661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d87f9cb8-v4hv4,Uid:8d8ff3b5-7370-40af-945d-8fef79b8d3a6,Namespace:calico-system,Attempt:0,} returns sandbox id \"58317beadb05d284e81d3765054618846de256152af2576964a2474a3c370067\"" Dec 16 13:35:07.892936 containerd[1607]: time="2025-12-16T13:35:07.892917352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:35:08.272707 containerd[1607]: time="2025-12-16T13:35:08.272663980Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:08.273162 containerd[1607]: time="2025-12-16T13:35:08.273142030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:35:08.273196 containerd[1607]: time="2025-12-16T13:35:08.273189477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:35:08.273749 kubelet[2936]: E1216 13:35:08.273309 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:35:08.273749 kubelet[2936]: E1216 13:35:08.273396 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:35:08.279375 kubelet[2936]: E1216 13:35:08.279320 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:961cb6d8db104cc6a58fd1d09c5a9cf2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mjfmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d87f9cb8-v4hv4_calico-system(8d8ff3b5-7370-40af-945d-8fef79b8d3a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:08.281068 containerd[1607]: time="2025-12-16T13:35:08.280963289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:35:08.306219 kubelet[2936]: I1216 13:35:08.306191 2936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c2fd014-23b6-46cf-a696-49f2992f6d8d" path="/var/lib/kubelet/pods/1c2fd014-23b6-46cf-a696-49f2992f6d8d/volumes" Dec 16 13:35:08.650985 containerd[1607]: time="2025-12-16T13:35:08.650869673Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:08.651326 containerd[1607]: time="2025-12-16T13:35:08.651178286Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:35:08.651326 containerd[1607]: time="2025-12-16T13:35:08.651252665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:35:08.651532 kubelet[2936]: E1216 13:35:08.651499 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:35:08.651725 kubelet[2936]: E1216 13:35:08.651540 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:35:08.651756 kubelet[2936]: E1216 13:35:08.651648 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mjfmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d87f9cb8-v4hv4_calico-system(8d8ff3b5-7370-40af-945d-8fef79b8d3a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:08.652924 kubelet[2936]: E1216 13:35:08.652896 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d87f9cb8-v4hv4" podUID="8d8ff3b5-7370-40af-945d-8fef79b8d3a6" Dec 16 13:35:08.808323 systemd-networkd[1506]: cali264be4bdf6d: 
Gained IPv6LL Dec 16 13:35:09.256391 systemd-networkd[1506]: vxlan.calico: Gained IPv6LL Dec 16 13:35:09.546111 kubelet[2936]: E1216 13:35:09.546034 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d87f9cb8-v4hv4" podUID="8d8ff3b5-7370-40af-945d-8fef79b8d3a6" Dec 16 13:35:10.306946 containerd[1607]: time="2025-12-16T13:35:10.306886143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zr5tc,Uid:ddb278be-f5ee-492e-b7ea-fb105804e93d,Namespace:kube-system,Attempt:0,}" Dec 16 13:35:10.405012 systemd-networkd[1506]: calie0369550071: Link UP Dec 16 13:35:10.405666 systemd-networkd[1506]: calie0369550071: Gained carrier Dec 16 13:35:10.416324 containerd[1607]: 2025-12-16 13:35:10.355 [INFO][4250] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--zr5tc-eth0 coredns-674b8bbfcf- kube-system ddb278be-f5ee-492e-b7ea-fb105804e93d 820 0 2025-12-16 13:34:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-zr5tc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie0369550071 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717" Namespace="kube-system" Pod="coredns-674b8bbfcf-zr5tc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zr5tc-" Dec 16 13:35:10.416324 containerd[1607]: 2025-12-16 13:35:10.355 [INFO][4250] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717" Namespace="kube-system" Pod="coredns-674b8bbfcf-zr5tc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zr5tc-eth0" Dec 16 13:35:10.416324 containerd[1607]: 2025-12-16 13:35:10.381 [INFO][4261] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717" HandleID="k8s-pod-network.a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717" Workload="localhost-k8s-coredns--674b8bbfcf--zr5tc-eth0" Dec 16 13:35:10.416463 containerd[1607]: 2025-12-16 13:35:10.382 [INFO][4261] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717" HandleID="k8s-pod-network.a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717" Workload="localhost-k8s-coredns--674b8bbfcf--zr5tc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-zr5tc", "timestamp":"2025-12-16 13:35:10.381863424 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:35:10.416463 containerd[1607]: 2025-12-16 13:35:10.382 [INFO][4261] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:35:10.416463 containerd[1607]: 2025-12-16 13:35:10.382 [INFO][4261] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:35:10.416463 containerd[1607]: 2025-12-16 13:35:10.382 [INFO][4261] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 13:35:10.416463 containerd[1607]: 2025-12-16 13:35:10.387 [INFO][4261] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717" host="localhost" Dec 16 13:35:10.416463 containerd[1607]: 2025-12-16 13:35:10.389 [INFO][4261] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 13:35:10.416463 containerd[1607]: 2025-12-16 13:35:10.391 [INFO][4261] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 13:35:10.416463 containerd[1607]: 2025-12-16 13:35:10.392 [INFO][4261] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 13:35:10.416463 containerd[1607]: 2025-12-16 13:35:10.393 [INFO][4261] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 13:35:10.416463 containerd[1607]: 2025-12-16 13:35:10.393 [INFO][4261] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717" host="localhost" Dec 16 13:35:10.416626 containerd[1607]: 2025-12-16 13:35:10.394 [INFO][4261] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717 Dec 16 13:35:10.416626 containerd[1607]: 2025-12-16 13:35:10.396 [INFO][4261] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717" host="localhost" Dec 16 13:35:10.416626 containerd[1607]: 2025-12-16 13:35:10.398 [INFO][4261] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717" host="localhost" Dec 16 13:35:10.416626 containerd[1607]: 2025-12-16 13:35:10.398 [INFO][4261] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717" host="localhost" Dec 16 13:35:10.416626 containerd[1607]: 2025-12-16 13:35:10.398 [INFO][4261] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:35:10.416626 containerd[1607]: 2025-12-16 13:35:10.398 [INFO][4261] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717" HandleID="k8s-pod-network.a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717" Workload="localhost-k8s-coredns--674b8bbfcf--zr5tc-eth0" Dec 16 13:35:10.416712 containerd[1607]: 2025-12-16 13:35:10.401 [INFO][4250] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717" Namespace="kube-system" Pod="coredns-674b8bbfcf-zr5tc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zr5tc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zr5tc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ddb278be-f5ee-492e-b7ea-fb105804e93d", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-zr5tc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0369550071", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:35:10.416757 containerd[1607]: 2025-12-16 13:35:10.401 [INFO][4250] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717" Namespace="kube-system" Pod="coredns-674b8bbfcf-zr5tc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zr5tc-eth0" Dec 16 13:35:10.416757 containerd[1607]: 2025-12-16 13:35:10.401 [INFO][4250] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0369550071 ContainerID="a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717" Namespace="kube-system" Pod="coredns-674b8bbfcf-zr5tc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zr5tc-eth0" Dec 16 13:35:10.416757 containerd[1607]: 2025-12-16 13:35:10.405 [INFO][4250] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717" Namespace="kube-system" Pod="coredns-674b8bbfcf-zr5tc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zr5tc-eth0" Dec 16 13:35:10.416811 
containerd[1607]: 2025-12-16 13:35:10.406 [INFO][4250] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717" Namespace="kube-system" Pod="coredns-674b8bbfcf-zr5tc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zr5tc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zr5tc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ddb278be-f5ee-492e-b7ea-fb105804e93d", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717", Pod:"coredns-674b8bbfcf-zr5tc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0369550071", MAC:"ca:23:b5:ce:83:ae", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:35:10.416811 containerd[1607]: 2025-12-16 13:35:10.412 [INFO][4250] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717" Namespace="kube-system" Pod="coredns-674b8bbfcf-zr5tc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zr5tc-eth0" Dec 16 13:35:10.430494 containerd[1607]: time="2025-12-16T13:35:10.429944952Z" level=info msg="connecting to shim a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717" address="unix:///run/containerd/s/74b51668f27d715d8d305d7954c8ee98689fdb78f42073459f6a936757e10ffb" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:35:10.451447 systemd[1]: Started cri-containerd-a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717.scope - libcontainer container a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717. 
Dec 16 13:35:10.458759 systemd-resolved[1507]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 13:35:10.485960 containerd[1607]: time="2025-12-16T13:35:10.485935093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zr5tc,Uid:ddb278be-f5ee-492e-b7ea-fb105804e93d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717\"" Dec 16 13:35:10.493162 containerd[1607]: time="2025-12-16T13:35:10.493144593Z" level=info msg="CreateContainer within sandbox \"a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:35:10.505433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3152437036.mount: Deactivated successfully. Dec 16 13:35:10.507561 containerd[1607]: time="2025-12-16T13:35:10.507544545Z" level=info msg="Container 925aace531ece0c324b53a3b0269ee0565a8d11dcdaeabd62193d1c33635b158: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:35:10.509749 containerd[1607]: time="2025-12-16T13:35:10.509737178Z" level=info msg="CreateContainer within sandbox \"a016ee9226e2952964066b5c425a910e48790bd33477d0b2525686981db0f717\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"925aace531ece0c324b53a3b0269ee0565a8d11dcdaeabd62193d1c33635b158\"" Dec 16 13:35:10.510420 containerd[1607]: time="2025-12-16T13:35:10.510342981Z" level=info msg="StartContainer for \"925aace531ece0c324b53a3b0269ee0565a8d11dcdaeabd62193d1c33635b158\"" Dec 16 13:35:10.511019 containerd[1607]: time="2025-12-16T13:35:10.511008248Z" level=info msg="connecting to shim 925aace531ece0c324b53a3b0269ee0565a8d11dcdaeabd62193d1c33635b158" address="unix:///run/containerd/s/74b51668f27d715d8d305d7954c8ee98689fdb78f42073459f6a936757e10ffb" protocol=ttrpc version=3 Dec 16 13:35:10.525327 systemd[1]: Started cri-containerd-925aace531ece0c324b53a3b0269ee0565a8d11dcdaeabd62193d1c33635b158.scope - libcontainer container 925aace531ece0c324b53a3b0269ee0565a8d11dcdaeabd62193d1c33635b158. 
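Both "connecting to shim" entries above use the same ttrpc socket, unix:///run/containerd/s/74b51668f27d715d8d305d7954c8ee98689fdb78f42073459f6a936757e10ffb: the coredns container 925aace5... is started through the shim that already serves its pod sandbox a016ee92..., rather than through a separate shim process, which is the usual per-pod shim layout in containerd.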
Dec 16 13:35:10.545182 containerd[1607]: time="2025-12-16T13:35:10.545155322Z" level=info msg="StartContainer for \"925aace531ece0c324b53a3b0269ee0565a8d11dcdaeabd62193d1c33635b158\" returns successfully" Dec 16 13:35:10.554629 kubelet[2936]: I1216 13:35:10.554596 2936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zr5tc" podStartSLOduration=33.554584273 podStartE2EDuration="33.554584273s" podCreationTimestamp="2025-12-16 13:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:35:10.553585206 +0000 UTC m=+40.332083306" watchObservedRunningTime="2025-12-16 13:35:10.554584273 +0000 UTC m=+40.333082369" Dec 16 13:35:11.305268 containerd[1607]: time="2025-12-16T13:35:11.304949054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f94fcf66-7qr5n,Uid:ec4d992b-26cc-4e71-9db7-9af961649e2b,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:35:11.381278 systemd-networkd[1506]: cali54908ef9499: Link UP Dec 16 13:35:11.381987 systemd-networkd[1506]: cali54908ef9499: Gained carrier Dec 16 13:35:11.394772 containerd[1607]: 2025-12-16 13:35:11.333 [INFO][4359] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59f94fcf66--7qr5n-eth0 calico-apiserver-59f94fcf66- calico-apiserver ec4d992b-26cc-4e71-9db7-9af961649e2b 821 0 2025-12-16 13:34:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59f94fcf66 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59f94fcf66-7qr5n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali54908ef9499 [] [] }} ContainerID="bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16" Namespace="calico-apiserver" Pod="calico-apiserver-59f94fcf66-7qr5n" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f94fcf66--7qr5n-" Dec 16 13:35:11.394772 containerd[1607]: 2025-12-16 13:35:11.333 [INFO][4359] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16" Namespace="calico-apiserver" Pod="calico-apiserver-59f94fcf66-7qr5n" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f94fcf66--7qr5n-eth0" Dec 16 13:35:11.394772 containerd[1607]: 2025-12-16 13:35:11.347 [INFO][4371] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16" HandleID="k8s-pod-network.bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16" Workload="localhost-k8s-calico--apiserver--59f94fcf66--7qr5n-eth0" Dec 16 13:35:11.394772 containerd[1607]: 2025-12-16 13:35:11.348 [INFO][4371] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16" HandleID="k8s-pod-network.bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16" Workload="localhost-k8s-calico--apiserver--59f94fcf66--7qr5n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59f94fcf66-7qr5n", "timestamp":"2025-12-16 13:35:11.347955362 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:35:11.394772 containerd[1607]: 2025-12-16 13:35:11.348 [INFO][4371] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:35:11.394772 containerd[1607]: 2025-12-16 13:35:11.348 [INFO][4371] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:35:11.394772 containerd[1607]: 2025-12-16 13:35:11.348 [INFO][4371] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 13:35:11.394772 containerd[1607]: 2025-12-16 13:35:11.355 [INFO][4371] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16" host="localhost" Dec 16 13:35:11.394772 containerd[1607]: 2025-12-16 13:35:11.358 [INFO][4371] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 13:35:11.394772 containerd[1607]: 2025-12-16 13:35:11.360 [INFO][4371] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 13:35:11.394772 containerd[1607]: 2025-12-16 13:35:11.362 [INFO][4371] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 13:35:11.394772 containerd[1607]: 2025-12-16 13:35:11.365 [INFO][4371] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 13:35:11.394772 containerd[1607]: 2025-12-16 13:35:11.365 [INFO][4371] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16" host="localhost" Dec 16 13:35:11.394772 containerd[1607]: 2025-12-16 13:35:11.367 [INFO][4371] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16 Dec 16 13:35:11.394772 containerd[1607]: 2025-12-16 13:35:11.369 [INFO][4371] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16" host="localhost" Dec 16 13:35:11.394772 containerd[1607]: 2025-12-16 13:35:11.373 [INFO][4371] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16" host="localhost" Dec 16 13:35:11.394772 containerd[1607]: 2025-12-16 13:35:11.373 [INFO][4371] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16" host="localhost" Dec 16 13:35:11.394772 containerd[1607]: 2025-12-16 13:35:11.373 [INFO][4371] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:35:11.394772 containerd[1607]: 2025-12-16 13:35:11.373 [INFO][4371] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16" HandleID="k8s-pod-network.bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16" Workload="localhost-k8s-calico--apiserver--59f94fcf66--7qr5n-eth0" Dec 16 13:35:11.409264 containerd[1607]: 2025-12-16 13:35:11.376 [INFO][4359] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16" Namespace="calico-apiserver" Pod="calico-apiserver-59f94fcf66-7qr5n" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f94fcf66--7qr5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f94fcf66--7qr5n-eth0", GenerateName:"calico-apiserver-59f94fcf66-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec4d992b-26cc-4e71-9db7-9af961649e2b", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 34, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f94fcf66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59f94fcf66-7qr5n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54908ef9499", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:35:11.409264 containerd[1607]: 2025-12-16 13:35:11.376 [INFO][4359] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16" Namespace="calico-apiserver" Pod="calico-apiserver-59f94fcf66-7qr5n" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f94fcf66--7qr5n-eth0" Dec 16 13:35:11.409264 containerd[1607]: 2025-12-16 13:35:11.376 [INFO][4359] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54908ef9499 ContainerID="bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16" Namespace="calico-apiserver" Pod="calico-apiserver-59f94fcf66-7qr5n" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f94fcf66--7qr5n-eth0" Dec 16 13:35:11.409264 containerd[1607]: 2025-12-16 13:35:11.382 [INFO][4359] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16" Namespace="calico-apiserver" Pod="calico-apiserver-59f94fcf66-7qr5n" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f94fcf66--7qr5n-eth0" Dec 16 13:35:11.409264 containerd[1607]: 2025-12-16 13:35:11.383 [INFO][4359] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16" Namespace="calico-apiserver" Pod="calico-apiserver-59f94fcf66-7qr5n" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f94fcf66--7qr5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f94fcf66--7qr5n-eth0", GenerateName:"calico-apiserver-59f94fcf66-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec4d992b-26cc-4e71-9db7-9af961649e2b", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 34, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f94fcf66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16", Pod:"calico-apiserver-59f94fcf66-7qr5n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54908ef9499", MAC:"96:fe:9f:89:12:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:35:11.409264 containerd[1607]: 2025-12-16 13:35:11.389 [INFO][4359] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16" Namespace="calico-apiserver" Pod="calico-apiserver-59f94fcf66-7qr5n" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f94fcf66--7qr5n-eth0" Dec 16 13:35:11.439136 containerd[1607]: time="2025-12-16T13:35:11.439053539Z" level=info msg="connecting to shim bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16" address="unix:///run/containerd/s/948110023ad563ea4c421ac8508343a7b6dd8905639d11085d957edcdec7d4e3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:35:11.466313 systemd[1]: Started cri-containerd-bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16.scope - libcontainer container bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16. 
Dec 16 13:35:11.474748 systemd-resolved[1507]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 13:35:11.500467 containerd[1607]: time="2025-12-16T13:35:11.500438342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f94fcf66-7qr5n,Uid:ec4d992b-26cc-4e71-9db7-9af961649e2b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bb3ea3601b9b91ece45ea36842a05af0a2814129057f25224ae61036ec9f2c16\"" Dec 16 13:35:11.504840 containerd[1607]: time="2025-12-16T13:35:11.504820629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:35:11.835247 containerd[1607]: time="2025-12-16T13:35:11.835188136Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:11.835782 containerd[1607]: time="2025-12-16T13:35:11.835718365Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:35:11.835782 containerd[1607]: time="2025-12-16T13:35:11.835767169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:35:11.836049 kubelet[2936]: E1216 13:35:11.835973 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:35:11.836049 kubelet[2936]: E1216 13:35:11.836029 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:35:11.836666 kubelet[2936]: E1216 13:35:11.836468 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nctwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59f94fcf66-7qr5n_calico-apiserver(ec4d992b-26cc-4e71-9db7-9af961649e2b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:11.838486 kubelet[2936]: E1216 13:35:11.838385 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f94fcf66-7qr5n" podUID="ec4d992b-26cc-4e71-9db7-9af961649e2b" Dec 16 13:35:12.305656 containerd[1607]: time="2025-12-16T13:35:12.305602731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f95cbfdd-lh872,Uid:7900a761-f455-401e-a05b-5ba11ccd5975,Namespace:calico-system,Attempt:0,}" Dec 16 13:35:12.329460 systemd-networkd[1506]: calie0369550071: Gained IPv6LL Dec 16 13:35:12.364458 systemd-networkd[1506]: cali2869cc5a4f5: Link UP Dec 16 13:35:12.365332 systemd-networkd[1506]: cali2869cc5a4f5: Gained carrier Dec 16 13:35:12.373782 containerd[1607]: 2025-12-16 13:35:12.330 [INFO][4435] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--78f95cbfdd--lh872-eth0 calico-kube-controllers-78f95cbfdd- calico-system 7900a761-f455-401e-a05b-5ba11ccd5975 816 0 2025-12-16 13:34:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78f95cbfdd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-78f95cbfdd-lh872 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2869cc5a4f5 [] [] }} ContainerID="adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2" Namespace="calico-system" Pod="calico-kube-controllers-78f95cbfdd-lh872" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f95cbfdd--lh872-" Dec 16 13:35:12.373782 containerd[1607]: 2025-12-16 13:35:12.331 [INFO][4435] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2" 
Namespace="calico-system" Pod="calico-kube-controllers-78f95cbfdd-lh872" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f95cbfdd--lh872-eth0" Dec 16 13:35:12.373782 containerd[1607]: 2025-12-16 13:35:12.345 [INFO][4447] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2" HandleID="k8s-pod-network.adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2" Workload="localhost-k8s-calico--kube--controllers--78f95cbfdd--lh872-eth0" Dec 16 13:35:12.373782 containerd[1607]: 2025-12-16 13:35:12.345 [INFO][4447] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2" HandleID="k8s-pod-network.adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2" Workload="localhost-k8s-calico--kube--controllers--78f95cbfdd--lh872-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f180), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-78f95cbfdd-lh872", "timestamp":"2025-12-16 13:35:12.345642466 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:35:12.373782 containerd[1607]: 2025-12-16 13:35:12.345 [INFO][4447] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:35:12.373782 containerd[1607]: 2025-12-16 13:35:12.345 [INFO][4447] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:35:12.373782 containerd[1607]: 2025-12-16 13:35:12.345 [INFO][4447] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 13:35:12.373782 containerd[1607]: 2025-12-16 13:35:12.349 [INFO][4447] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2" host="localhost" Dec 16 13:35:12.373782 containerd[1607]: 2025-12-16 13:35:12.351 [INFO][4447] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 13:35:12.373782 containerd[1607]: 2025-12-16 13:35:12.353 [INFO][4447] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 13:35:12.373782 containerd[1607]: 2025-12-16 13:35:12.354 [INFO][4447] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 13:35:12.373782 containerd[1607]: 2025-12-16 13:35:12.355 [INFO][4447] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 13:35:12.373782 containerd[1607]: 2025-12-16 13:35:12.355 [INFO][4447] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2" host="localhost" Dec 16 13:35:12.373782 containerd[1607]: 2025-12-16 13:35:12.356 [INFO][4447] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2 Dec 16 13:35:12.373782 containerd[1607]: 2025-12-16 13:35:12.357 [INFO][4447] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2" host="localhost" Dec 16 13:35:12.373782 containerd[1607]: 2025-12-16 13:35:12.360 [INFO][4447] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2" host="localhost" Dec 16 13:35:12.373782 containerd[1607]: 2025-12-16 13:35:12.360 [INFO][4447] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2" host="localhost" Dec 16 13:35:12.373782 containerd[1607]: 2025-12-16 13:35:12.360 [INFO][4447] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:35:12.373782 containerd[1607]: 2025-12-16 13:35:12.360 [INFO][4447] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2" HandleID="k8s-pod-network.adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2" Workload="localhost-k8s-calico--kube--controllers--78f95cbfdd--lh872-eth0" Dec 16 13:35:12.376356 containerd[1607]: 2025-12-16 13:35:12.362 [INFO][4435] cni-plugin/k8s.go 418: Populated endpoint ContainerID="adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2" Namespace="calico-system" Pod="calico-kube-controllers-78f95cbfdd-lh872" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f95cbfdd--lh872-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78f95cbfdd--lh872-eth0", GenerateName:"calico-kube-controllers-78f95cbfdd-", Namespace:"calico-system", SelfLink:"", UID:"7900a761-f455-401e-a05b-5ba11ccd5975", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 34, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f95cbfdd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-78f95cbfdd-lh872", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2869cc5a4f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:35:12.376356 containerd[1607]: 2025-12-16 13:35:12.362 [INFO][4435] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2" Namespace="calico-system" Pod="calico-kube-controllers-78f95cbfdd-lh872" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f95cbfdd--lh872-eth0" Dec 16 13:35:12.376356 containerd[1607]: 2025-12-16 13:35:12.362 [INFO][4435] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2869cc5a4f5 ContainerID="adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2" Namespace="calico-system" Pod="calico-kube-controllers-78f95cbfdd-lh872" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f95cbfdd--lh872-eth0" Dec 16 13:35:12.376356 containerd[1607]: 2025-12-16 13:35:12.364 [INFO][4435] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2" Namespace="calico-system" Pod="calico-kube-controllers-78f95cbfdd-lh872" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f95cbfdd--lh872-eth0" Dec 16 13:35:12.376356 containerd[1607]: 2025-12-16 13:35:12.364 [INFO][4435] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2" Namespace="calico-system" Pod="calico-kube-controllers-78f95cbfdd-lh872" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f95cbfdd--lh872-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78f95cbfdd--lh872-eth0", GenerateName:"calico-kube-controllers-78f95cbfdd-", Namespace:"calico-system", SelfLink:"", UID:"7900a761-f455-401e-a05b-5ba11ccd5975", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 34, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f95cbfdd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2", Pod:"calico-kube-controllers-78f95cbfdd-lh872", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2869cc5a4f5", MAC:"c6:25:21:c0:3f:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:35:12.376356 containerd[1607]: 2025-12-16 13:35:12.371 [INFO][4435] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2" Namespace="calico-system" Pod="calico-kube-controllers-78f95cbfdd-lh872" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f95cbfdd--lh872-eth0" Dec 16 13:35:12.387488 containerd[1607]: time="2025-12-16T13:35:12.387462531Z" level=info msg="connecting to shim adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2" address="unix:///run/containerd/s/d4fe9046de4a94d16f6ec0fe2c0a49c5a642b91972c1fd649a7349a0724a23e4" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:35:12.408377 systemd[1]: Started cri-containerd-adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2.scope - libcontainer container adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2. 
Dec 16 13:35:12.416035 systemd-resolved[1507]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 13:35:12.439885 containerd[1607]: time="2025-12-16T13:35:12.439861261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f95cbfdd-lh872,Uid:7900a761-f455-401e-a05b-5ba11ccd5975,Namespace:calico-system,Attempt:0,} returns sandbox id \"adff0d23812b6e5e229660f6f3844db2cf427ebd7674e01ff599784fd39658d2\"" Dec 16 13:35:12.441141 containerd[1607]: time="2025-12-16T13:35:12.441059473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:35:12.552201 kubelet[2936]: E1216 13:35:12.552163 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f94fcf66-7qr5n" podUID="ec4d992b-26cc-4e71-9db7-9af961649e2b" Dec 16 13:35:12.783763 containerd[1607]: time="2025-12-16T13:35:12.783328711Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:12.783763 containerd[1607]: time="2025-12-16T13:35:12.783612753Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:35:12.783763 containerd[1607]: time="2025-12-16T13:35:12.783672424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:35:12.784430 kubelet[2936]: E1216 13:35:12.784411 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:35:12.784492 kubelet[2936]: E1216 13:35:12.784482 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:35:12.785830 kubelet[2936]: E1216 13:35:12.785565 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n88np,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78f95cbfdd-lh872_calico-system(7900a761-f455-401e-a05b-5ba11ccd5975): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:12.786975 kubelet[2936]: E1216 13:35:12.786941 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f95cbfdd-lh872" podUID="7900a761-f455-401e-a05b-5ba11ccd5975" Dec 16 13:35:13.160309 systemd-networkd[1506]: cali54908ef9499: Gained IPv6LL Dec 16 13:35:13.305778 
containerd[1607]: time="2025-12-16T13:35:13.305488226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h54lr,Uid:dfb9abff-2398-4522-b3cb-8954570c6d45,Namespace:kube-system,Attempt:0,}" Dec 16 13:35:13.305778 containerd[1607]: time="2025-12-16T13:35:13.305579412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f94fcf66-9fr5g,Uid:d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:35:13.309319 containerd[1607]: time="2025-12-16T13:35:13.306091712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5bgh6,Uid:f5d82f91-8d34-4dd6-9053-b327d15a7af5,Namespace:calico-system,Attempt:0,}" Dec 16 13:35:13.457018 systemd-networkd[1506]: cali5d0615be7fa: Link UP Dec 16 13:35:13.459719 systemd-networkd[1506]: cali5d0615be7fa: Gained carrier Dec 16 13:35:13.491416 containerd[1607]: 2025-12-16 13:35:13.386 [INFO][4525] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59f94fcf66--9fr5g-eth0 calico-apiserver-59f94fcf66- calico-apiserver d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf 812 0 2025-12-16 13:34:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59f94fcf66 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59f94fcf66-9fr5g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5d0615be7fa [] [] }} ContainerID="78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c" Namespace="calico-apiserver" Pod="calico-apiserver-59f94fcf66-9fr5g" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f94fcf66--9fr5g-" Dec 16 13:35:13.491416 containerd[1607]: 2025-12-16 13:35:13.386 [INFO][4525] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c" Namespace="calico-apiserver" Pod="calico-apiserver-59f94fcf66-9fr5g" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f94fcf66--9fr5g-eth0" Dec 16 13:35:13.491416 containerd[1607]: 2025-12-16 13:35:13.415 [INFO][4561] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c" HandleID="k8s-pod-network.78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c" Workload="localhost-k8s-calico--apiserver--59f94fcf66--9fr5g-eth0" Dec 16 13:35:13.491416 containerd[1607]: 2025-12-16 13:35:13.415 [INFO][4561] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c" HandleID="k8s-pod-network.78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c" Workload="localhost-k8s-calico--apiserver--59f94fcf66--9fr5g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f770), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59f94fcf66-9fr5g", "timestamp":"2025-12-16 13:35:13.415036171 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:35:13.491416 containerd[1607]: 2025-12-16 13:35:13.415 [INFO][4561] ipam/ipam_plugin.go 377: About to 
acquire host-wide IPAM lock. Dec 16 13:35:13.491416 containerd[1607]: 2025-12-16 13:35:13.415 [INFO][4561] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:35:13.491416 containerd[1607]: 2025-12-16 13:35:13.415 [INFO][4561] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 13:35:13.491416 containerd[1607]: 2025-12-16 13:35:13.429 [INFO][4561] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c" host="localhost" Dec 16 13:35:13.491416 containerd[1607]: 2025-12-16 13:35:13.432 [INFO][4561] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 13:35:13.491416 containerd[1607]: 2025-12-16 13:35:13.435 [INFO][4561] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 13:35:13.491416 containerd[1607]: 2025-12-16 13:35:13.435 [INFO][4561] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 13:35:13.491416 containerd[1607]: 2025-12-16 13:35:13.436 [INFO][4561] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 13:35:13.491416 containerd[1607]: 2025-12-16 13:35:13.436 [INFO][4561] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c" host="localhost" Dec 16 13:35:13.491416 containerd[1607]: 2025-12-16 13:35:13.437 [INFO][4561] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c Dec 16 13:35:13.491416 containerd[1607]: 2025-12-16 13:35:13.444 [INFO][4561] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c" host="localhost" Dec 16 13:35:13.491416 containerd[1607]: 2025-12-16 13:35:13.450 [INFO][4561] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c" host="localhost" Dec 16 13:35:13.491416 containerd[1607]: 2025-12-16 13:35:13.450 [INFO][4561] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c" host="localhost" Dec 16 13:35:13.491416 containerd[1607]: 2025-12-16 13:35:13.450 [INFO][4561] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:35:13.491416 containerd[1607]: 2025-12-16 13:35:13.450 [INFO][4561] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c" HandleID="k8s-pod-network.78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c" Workload="localhost-k8s-calico--apiserver--59f94fcf66--9fr5g-eth0" Dec 16 13:35:13.491960 containerd[1607]: 2025-12-16 13:35:13.453 [INFO][4525] cni-plugin/k8s.go 418: Populated endpoint ContainerID="78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c" Namespace="calico-apiserver" Pod="calico-apiserver-59f94fcf66-9fr5g" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f94fcf66--9fr5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f94fcf66--9fr5g-eth0", GenerateName:"calico-apiserver-59f94fcf66-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 34, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f94fcf66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59f94fcf66-9fr5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d0615be7fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:35:13.491960 containerd[1607]: 2025-12-16 13:35:13.454 [INFO][4525] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c" Namespace="calico-apiserver" Pod="calico-apiserver-59f94fcf66-9fr5g" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f94fcf66--9fr5g-eth0" Dec 16 13:35:13.491960 containerd[1607]: 2025-12-16 13:35:13.454 [INFO][4525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d0615be7fa ContainerID="78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c" Namespace="calico-apiserver" Pod="calico-apiserver-59f94fcf66-9fr5g" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f94fcf66--9fr5g-eth0" Dec 16 13:35:13.491960 containerd[1607]: 2025-12-16 13:35:13.459 [INFO][4525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c" Namespace="calico-apiserver" Pod="calico-apiserver-59f94fcf66-9fr5g" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f94fcf66--9fr5g-eth0" Dec 16 13:35:13.491960 containerd[1607]: 2025-12-16 13:35:13.460 [INFO][4525] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c" Namespace="calico-apiserver" Pod="calico-apiserver-59f94fcf66-9fr5g" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f94fcf66--9fr5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f94fcf66--9fr5g-eth0", GenerateName:"calico-apiserver-59f94fcf66-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 34, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f94fcf66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c", Pod:"calico-apiserver-59f94fcf66-9fr5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d0615be7fa", MAC:"96:1f:60:ec:46:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:35:13.491960 containerd[1607]: 2025-12-16 13:35:13.486 [INFO][4525] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c" Namespace="calico-apiserver" Pod="calico-apiserver-59f94fcf66-9fr5g" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f94fcf66--9fr5g-eth0" Dec 16 13:35:13.566661 systemd-networkd[1506]: cali0aec8cf6bbc: Link UP Dec 16 13:35:13.570567 systemd-networkd[1506]: cali0aec8cf6bbc: Gained carrier Dec 16 13:35:13.576247 kubelet[2936]: E1216 13:35:13.576075 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f95cbfdd-lh872" podUID="7900a761-f455-401e-a05b-5ba11ccd5975" Dec 16 13:35:13.591030 containerd[1607]: 2025-12-16 13:35:13.373 [INFO][4515] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--h54lr-eth0 coredns-674b8bbfcf- kube-system dfb9abff-2398-4522-b3cb-8954570c6d45 809 0 2025-12-16 13:34:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-h54lr eth0 
coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0aec8cf6bbc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df" Namespace="kube-system" Pod="coredns-674b8bbfcf-h54lr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h54lr-" Dec 16 13:35:13.591030 containerd[1607]: 2025-12-16 13:35:13.373 [INFO][4515] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df" Namespace="kube-system" Pod="coredns-674b8bbfcf-h54lr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h54lr-eth0" Dec 16 13:35:13.591030 containerd[1607]: 2025-12-16 13:35:13.420 [INFO][4553] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df" HandleID="k8s-pod-network.a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df" Workload="localhost-k8s-coredns--674b8bbfcf--h54lr-eth0" Dec 16 13:35:13.591030 containerd[1607]: 2025-12-16 13:35:13.420 [INFO][4553] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df" HandleID="k8s-pod-network.a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df" Workload="localhost-k8s-coredns--674b8bbfcf--h54lr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025b820), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-h54lr", "timestamp":"2025-12-16 13:35:13.420073043 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:35:13.591030 containerd[1607]: 2025-12-16 13:35:13.420 [INFO][4553] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:35:13.591030 containerd[1607]: 2025-12-16 13:35:13.450 [INFO][4553] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:35:13.591030 containerd[1607]: 2025-12-16 13:35:13.450 [INFO][4553] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 13:35:13.591030 containerd[1607]: 2025-12-16 13:35:13.530 [INFO][4553] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df" host="localhost" Dec 16 13:35:13.591030 containerd[1607]: 2025-12-16 13:35:13.533 [INFO][4553] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 13:35:13.591030 containerd[1607]: 2025-12-16 13:35:13.535 [INFO][4553] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 13:35:13.591030 containerd[1607]: 2025-12-16 13:35:13.536 [INFO][4553] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 13:35:13.591030 containerd[1607]: 2025-12-16 13:35:13.537 [INFO][4553] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 13:35:13.591030 containerd[1607]: 2025-12-16 13:35:13.537 [INFO][4553] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df" host="localhost" Dec 16 13:35:13.591030 containerd[1607]: 2025-12-16 13:35:13.538 [INFO][4553] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df Dec 16 13:35:13.591030 containerd[1607]: 2025-12-16 13:35:13.543 [INFO][4553] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df" host="localhost" Dec 16 13:35:13.591030 containerd[1607]: 2025-12-16 13:35:13.556 [INFO][4553] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df" host="localhost" Dec 16 13:35:13.591030 containerd[1607]: 2025-12-16 13:35:13.556 [INFO][4553] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df" host="localhost" Dec 16 13:35:13.591030 containerd[1607]: 2025-12-16 13:35:13.556 [INFO][4553] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:35:13.591030 containerd[1607]: 2025-12-16 13:35:13.556 [INFO][4553] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df" HandleID="k8s-pod-network.a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df" Workload="localhost-k8s-coredns--674b8bbfcf--h54lr-eth0" Dec 16 13:35:13.591479 containerd[1607]: 2025-12-16 13:35:13.558 [INFO][4515] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df" Namespace="kube-system" Pod="coredns-674b8bbfcf-h54lr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h54lr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--h54lr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dfb9abff-2398-4522-b3cb-8954570c6d45", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-h54lr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0aec8cf6bbc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:35:13.591479 containerd[1607]: 2025-12-16 13:35:13.559 [INFO][4515] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df" Namespace="kube-system" Pod="coredns-674b8bbfcf-h54lr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h54lr-eth0" Dec 16 13:35:13.591479 containerd[1607]: 2025-12-16 13:35:13.559 [INFO][4515] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0aec8cf6bbc ContainerID="a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df" Namespace="kube-system" Pod="coredns-674b8bbfcf-h54lr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h54lr-eth0" Dec 16 13:35:13.591479 containerd[1607]: 2025-12-16 13:35:13.571 [INFO][4515] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df" Namespace="kube-system" Pod="coredns-674b8bbfcf-h54lr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h54lr-eth0" Dec 16 13:35:13.591479 
containerd[1607]: 2025-12-16 13:35:13.572 [INFO][4515] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df" Namespace="kube-system" Pod="coredns-674b8bbfcf-h54lr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h54lr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--h54lr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dfb9abff-2398-4522-b3cb-8954570c6d45", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df", Pod:"coredns-674b8bbfcf-h54lr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0aec8cf6bbc", MAC:"da:f7:b5:64:2b:ce", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:35:13.591479 containerd[1607]: 2025-12-16 13:35:13.589 [INFO][4515] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df" Namespace="kube-system" Pod="coredns-674b8bbfcf-h54lr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h54lr-eth0" Dec 16 13:35:13.605513 containerd[1607]: time="2025-12-16T13:35:13.605457925Z" level=info msg="connecting to shim 78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c" address="unix:///run/containerd/s/8cfa1c26396f933afe35db2198aa355006fa8bd1fe25f765fa0d5756fa226c0a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:35:13.630376 systemd[1]: Started cri-containerd-78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c.scope - libcontainer container 78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c. 
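In the coredns endpoint dumps above, the container ports are printed as Go hex literals (Port:0x35 and Port:0x23c1). Decoding them shows they match the named ports in the same records — dns and dns-tcp on 53, metrics on 9153; a tiny illustrative check:

```go
package main

import "fmt"

func main() {
	// Hex port values as printed in the WorkloadEndpointPort dump above.
	fmt.Println(0x35)   // 53   -> the "dns" and "dns-tcp" ports
	fmt.Println(0x23c1) // 9153 -> the "metrics" port
}
```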
Dec 16 13:35:13.632239 containerd[1607]: time="2025-12-16T13:35:13.631812587Z" level=info msg="connecting to shim a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df" address="unix:///run/containerd/s/eeee3b5340c1b38d044cc714ff0c68048b562e86a6e7bca82a4f71dbcea95953" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:35:13.654946 systemd-resolved[1507]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 13:35:13.663347 systemd[1]: Started cri-containerd-a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df.scope - libcontainer container a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df. Dec 16 13:35:13.664990 systemd-networkd[1506]: cali7828768357f: Link UP Dec 16 13:35:13.666610 systemd-networkd[1506]: cali7828768357f: Gained carrier Dec 16 13:35:13.679196 containerd[1607]: 2025-12-16 13:35:13.482 [INFO][4534] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--5bgh6-eth0 goldmane-666569f655- calico-system f5d82f91-8d34-4dd6-9053-b327d15a7af5 818 0 2025-12-16 13:34:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-5bgh6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7828768357f [] [] }} ContainerID="29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2" Namespace="calico-system" Pod="goldmane-666569f655-5bgh6" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5bgh6-" Dec 16 13:35:13.679196 containerd[1607]: 2025-12-16 13:35:13.482 [INFO][4534] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2" Namespace="calico-system" Pod="goldmane-666569f655-5bgh6" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5bgh6-eth0" Dec 16 13:35:13.679196 containerd[1607]: 2025-12-16 13:35:13.508 [INFO][4573] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2" HandleID="k8s-pod-network.29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2" Workload="localhost-k8s-goldmane--666569f655--5bgh6-eth0" Dec 16 13:35:13.679196 containerd[1607]: 2025-12-16 13:35:13.508 [INFO][4573] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2" HandleID="k8s-pod-network.29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2" Workload="localhost-k8s-goldmane--666569f655--5bgh6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025b280), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-5bgh6", "timestamp":"2025-12-16 13:35:13.508694 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:35:13.679196 containerd[1607]: 2025-12-16 13:35:13.508 [INFO][4573] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:35:13.679196 containerd[1607]: 2025-12-16 13:35:13.556 [INFO][4573] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:35:13.679196 containerd[1607]: 2025-12-16 13:35:13.558 [INFO][4573] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 13:35:13.679196 containerd[1607]: 2025-12-16 13:35:13.632 [INFO][4573] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2" host="localhost" Dec 16 13:35:13.679196 containerd[1607]: 2025-12-16 13:35:13.636 [INFO][4573] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 13:35:13.679196 containerd[1607]: 2025-12-16 13:35:13.643 [INFO][4573] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 13:35:13.679196 containerd[1607]: 2025-12-16 13:35:13.645 [INFO][4573] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 13:35:13.679196 containerd[1607]: 2025-12-16 13:35:13.646 [INFO][4573] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 13:35:13.679196 containerd[1607]: 2025-12-16 13:35:13.646 [INFO][4573] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2" host="localhost" Dec 16 13:35:13.679196 containerd[1607]: 2025-12-16 13:35:13.649 [INFO][4573] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2 Dec 16 13:35:13.679196 containerd[1607]: 2025-12-16 13:35:13.651 [INFO][4573] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2" host="localhost" Dec 16 13:35:13.679196 containerd[1607]: 2025-12-16 13:35:13.656 [INFO][4573] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2" host="localhost" Dec 16 13:35:13.679196 containerd[1607]: 2025-12-16 13:35:13.656 [INFO][4573] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2" host="localhost" Dec 16 13:35:13.679196 containerd[1607]: 2025-12-16 13:35:13.656 [INFO][4573] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
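The IPAM entries above claim 192.168.88.135 from the affinity block 192.168.88.128/26. A short sketch, again standard library only and not taken from the log, that reproduces the containment check and the block size:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and address as reported by ipam/ipam.go above.
	block := netip.MustParsePrefix("192.168.88.128/26")
	assigned := netip.MustParseAddr("192.168.88.135")

	// A /26 covers 2^(32-26) = 64 addresses: 192.168.88.128 through 192.168.88.191.
	size := 1 << (32 - block.Bits())
	fmt.Printf("block %s holds %d addresses\n", block, size)
	fmt.Printf("%s inside block: %v\n", assigned, block.Contains(assigned))
}

The coredns address 192.168.88.134/32 seen earlier and this goldmane address 192.168.88.135/32 both fall inside the same host-affine /26.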
Dec 16 13:35:13.679196 containerd[1607]: 2025-12-16 13:35:13.656 [INFO][4573] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2" HandleID="k8s-pod-network.29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2" Workload="localhost-k8s-goldmane--666569f655--5bgh6-eth0" Dec 16 13:35:13.680424 containerd[1607]: 2025-12-16 13:35:13.660 [INFO][4534] cni-plugin/k8s.go 418: Populated endpoint ContainerID="29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2" Namespace="calico-system" Pod="goldmane-666569f655-5bgh6" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5bgh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--5bgh6-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f5d82f91-8d34-4dd6-9053-b327d15a7af5", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 34, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-5bgh6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7828768357f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:35:13.680424 containerd[1607]: 2025-12-16 13:35:13.660 [INFO][4534] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2" Namespace="calico-system" Pod="goldmane-666569f655-5bgh6" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5bgh6-eth0" Dec 16 13:35:13.680424 containerd[1607]: 2025-12-16 13:35:13.660 [INFO][4534] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7828768357f ContainerID="29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2" Namespace="calico-system" Pod="goldmane-666569f655-5bgh6" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5bgh6-eth0" Dec 16 13:35:13.680424 containerd[1607]: 2025-12-16 13:35:13.666 [INFO][4534] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2" Namespace="calico-system" Pod="goldmane-666569f655-5bgh6" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5bgh6-eth0" Dec 16 13:35:13.680424 containerd[1607]: 2025-12-16 13:35:13.667 [INFO][4534] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2" Namespace="calico-system" Pod="goldmane-666569f655-5bgh6" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5bgh6-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--5bgh6-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f5d82f91-8d34-4dd6-9053-b327d15a7af5", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 34, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2", Pod:"goldmane-666569f655-5bgh6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7828768357f", MAC:"6a:28:ae:6e:c5:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:35:13.680424 containerd[1607]: 2025-12-16 13:35:13.677 [INFO][4534] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2" Namespace="calico-system" Pod="goldmane-666569f655-5bgh6" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5bgh6-eth0" Dec 16 13:35:13.684074 systemd-resolved[1507]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 13:35:13.699116 containerd[1607]: time="2025-12-16T13:35:13.699072277Z" level=info msg="connecting to shim 29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2" address="unix:///run/containerd/s/237b3932c4e732e3d5b7dc50335c6c044a4ba641cbf672782b2799740d110293" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:35:13.717427 systemd[1]: Started cri-containerd-29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2.scope - libcontainer container 29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2. 
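Each "connecting to shim" entry names a ttrpc endpoint on a unix socket under /run/containerd/s/. A trivial check, shown only as an illustration and using the socket path copied from the goldmane entry above, that such a path exists and really is a socket:

package main

import (
	"fmt"
	"io/fs"
	"os"
	"strings"
)

func main() {
	// Shim address exactly as printed in the log entry above.
	addr := "unix:///run/containerd/s/237b3932c4e732e3d5b7dc50335c6c044a4ba641cbf672782b2799740d110293"
	path := strings.TrimPrefix(addr, "unix://")

	info, err := os.Stat(path)
	if err != nil {
		fmt.Println("shim socket not reachable:", err)
		return
	}
	fmt.Printf("%s exists, socket=%v\n", path, info.Mode()&fs.ModeSocket != 0)
}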
Dec 16 13:35:13.740492 systemd-resolved[1507]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 13:35:13.742602 containerd[1607]: time="2025-12-16T13:35:13.742580685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h54lr,Uid:dfb9abff-2398-4522-b3cb-8954570c6d45,Namespace:kube-system,Attempt:0,} returns sandbox id \"a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df\"" Dec 16 13:35:13.748158 containerd[1607]: time="2025-12-16T13:35:13.748065767Z" level=info msg="CreateContainer within sandbox \"a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:35:13.751445 containerd[1607]: time="2025-12-16T13:35:13.751405680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f94fcf66-9fr5g,Uid:d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"78cd8138930309cb13785b2bd73c5d7cbb16197abf9c45529749a5cefa098e4c\"" Dec 16 13:35:13.753811 containerd[1607]: time="2025-12-16T13:35:13.753740216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:35:13.759578 containerd[1607]: time="2025-12-16T13:35:13.759552802Z" level=info msg="Container 4daf5a2723d513fe3d5152724cd8ae0bbe297efe27755f6d87d1155b18780ba6: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:35:13.765284 containerd[1607]: time="2025-12-16T13:35:13.765267087Z" level=info msg="CreateContainer within sandbox \"a235a3dd328bdbcd32876816f2f9baebfe6ca8b27e7fd6a585e4fecf076c98df\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4daf5a2723d513fe3d5152724cd8ae0bbe297efe27755f6d87d1155b18780ba6\"" Dec 16 13:35:13.766458 containerd[1607]: time="2025-12-16T13:35:13.765981800Z" level=info msg="StartContainer for \"4daf5a2723d513fe3d5152724cd8ae0bbe297efe27755f6d87d1155b18780ba6\"" Dec 16 13:35:13.766763 containerd[1607]: time="2025-12-16T13:35:13.766751579Z" level=info msg="connecting to shim 4daf5a2723d513fe3d5152724cd8ae0bbe297efe27755f6d87d1155b18780ba6" address="unix:///run/containerd/s/eeee3b5340c1b38d044cc714ff0c68048b562e86a6e7bca82a4f71dbcea95953" protocol=ttrpc version=3 Dec 16 13:35:13.782433 systemd[1]: Started cri-containerd-4daf5a2723d513fe3d5152724cd8ae0bbe297efe27755f6d87d1155b18780ba6.scope - libcontainer container 4daf5a2723d513fe3d5152724cd8ae0bbe297efe27755f6d87d1155b18780ba6. 
Dec 16 13:35:13.794136 containerd[1607]: time="2025-12-16T13:35:13.794110751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5bgh6,Uid:f5d82f91-8d34-4dd6-9053-b327d15a7af5,Namespace:calico-system,Attempt:0,} returns sandbox id \"29de8bbcb8115f11218c988931c25a43e315c346f9b338029f76addb4c15add2\"" Dec 16 13:35:13.804939 containerd[1607]: time="2025-12-16T13:35:13.804908609Z" level=info msg="StartContainer for \"4daf5a2723d513fe3d5152724cd8ae0bbe297efe27755f6d87d1155b18780ba6\" returns successfully" Dec 16 13:35:13.864332 systemd-networkd[1506]: cali2869cc5a4f5: Gained IPv6LL Dec 16 13:35:14.129011 containerd[1607]: time="2025-12-16T13:35:14.128406045Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:14.129011 containerd[1607]: time="2025-12-16T13:35:14.128798041Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:35:14.129011 containerd[1607]: time="2025-12-16T13:35:14.128852334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:35:14.129285 kubelet[2936]: E1216 13:35:14.128955 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:35:14.129285 kubelet[2936]: E1216 13:35:14.128987 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:35:14.129835 containerd[1607]: time="2025-12-16T13:35:14.129433615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:35:14.129898 kubelet[2936]: E1216 13:35:14.129600 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l7bhf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59f94fcf66-9fr5g_calico-apiserver(d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:14.131158 kubelet[2936]: E1216 13:35:14.131124 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f94fcf66-9fr5g" podUID="d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf" Dec 16 13:35:14.304863 containerd[1607]: time="2025-12-16T13:35:14.304771755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vq4wz,Uid:6f1d1d03-3f5c-4209-adf6-a2cec01d4b01,Namespace:calico-system,Attempt:0,}" Dec 16 13:35:14.371249 systemd-networkd[1506]: cali2d6f3b912b4: Link UP Dec 16 13:35:14.371931 systemd-networkd[1506]: cali2d6f3b912b4: Gained carrier Dec 16 13:35:14.383188 containerd[1607]: 2025-12-16 13:35:14.332 [INFO][4774] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--vq4wz-eth0 csi-node-driver- calico-system 6f1d1d03-3f5c-4209-adf6-a2cec01d4b01 704 0 2025-12-16 13:34:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver 
controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-vq4wz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2d6f3b912b4 [] [] }} ContainerID="1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1" Namespace="calico-system" Pod="csi-node-driver-vq4wz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vq4wz-" Dec 16 13:35:14.383188 containerd[1607]: 2025-12-16 13:35:14.332 [INFO][4774] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1" Namespace="calico-system" Pod="csi-node-driver-vq4wz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vq4wz-eth0" Dec 16 13:35:14.383188 containerd[1607]: 2025-12-16 13:35:14.350 [INFO][4786] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1" HandleID="k8s-pod-network.1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1" Workload="localhost-k8s-csi--node--driver--vq4wz-eth0" Dec 16 13:35:14.383188 containerd[1607]: 2025-12-16 13:35:14.350 [INFO][4786] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1" HandleID="k8s-pod-network.1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1" Workload="localhost-k8s-csi--node--driver--vq4wz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f090), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-vq4wz", "timestamp":"2025-12-16 13:35:14.350011349 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:35:14.383188 containerd[1607]: 2025-12-16 13:35:14.350 [INFO][4786] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:35:14.383188 containerd[1607]: 2025-12-16 13:35:14.350 [INFO][4786] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:35:14.383188 containerd[1607]: 2025-12-16 13:35:14.350 [INFO][4786] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 13:35:14.383188 containerd[1607]: 2025-12-16 13:35:14.355 [INFO][4786] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1" host="localhost" Dec 16 13:35:14.383188 containerd[1607]: 2025-12-16 13:35:14.357 [INFO][4786] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 13:35:14.383188 containerd[1607]: 2025-12-16 13:35:14.359 [INFO][4786] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 13:35:14.383188 containerd[1607]: 2025-12-16 13:35:14.360 [INFO][4786] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 13:35:14.383188 containerd[1607]: 2025-12-16 13:35:14.361 [INFO][4786] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 13:35:14.383188 containerd[1607]: 2025-12-16 13:35:14.361 [INFO][4786] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1" host="localhost" Dec 16 13:35:14.383188 containerd[1607]: 2025-12-16 13:35:14.362 [INFO][4786] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1 Dec 16 13:35:14.383188 containerd[1607]: 2025-12-16 13:35:14.364 [INFO][4786] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1" host="localhost" Dec 16 13:35:14.383188 containerd[1607]: 2025-12-16 13:35:14.367 [INFO][4786] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1" host="localhost" Dec 16 13:35:14.383188 containerd[1607]: 2025-12-16 13:35:14.367 [INFO][4786] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1" host="localhost" Dec 16 13:35:14.383188 containerd[1607]: 2025-12-16 13:35:14.367 [INFO][4786] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:35:14.383188 containerd[1607]: 2025-12-16 13:35:14.367 [INFO][4786] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1" HandleID="k8s-pod-network.1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1" Workload="localhost-k8s-csi--node--driver--vq4wz-eth0" Dec 16 13:35:14.385407 containerd[1607]: 2025-12-16 13:35:14.369 [INFO][4774] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1" Namespace="calico-system" Pod="csi-node-driver-vq4wz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vq4wz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vq4wz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6f1d1d03-3f5c-4209-adf6-a2cec01d4b01", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 34, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-vq4wz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2d6f3b912b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:35:14.385407 containerd[1607]: 2025-12-16 13:35:14.369 [INFO][4774] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1" Namespace="calico-system" Pod="csi-node-driver-vq4wz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vq4wz-eth0" Dec 16 13:35:14.385407 containerd[1607]: 2025-12-16 13:35:14.369 [INFO][4774] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d6f3b912b4 ContainerID="1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1" Namespace="calico-system" Pod="csi-node-driver-vq4wz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vq4wz-eth0" Dec 16 13:35:14.385407 containerd[1607]: 2025-12-16 13:35:14.371 [INFO][4774] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1" Namespace="calico-system" Pod="csi-node-driver-vq4wz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vq4wz-eth0" Dec 16 13:35:14.385407 containerd[1607]: 2025-12-16 13:35:14.372 [INFO][4774] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1" Namespace="calico-system" Pod="csi-node-driver-vq4wz" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--vq4wz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vq4wz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6f1d1d03-3f5c-4209-adf6-a2cec01d4b01", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 34, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1", Pod:"csi-node-driver-vq4wz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2d6f3b912b4", MAC:"36:86:3a:1c:9a:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:35:14.385407 containerd[1607]: 2025-12-16 13:35:14.379 [INFO][4774] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1" Namespace="calico-system" Pod="csi-node-driver-vq4wz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vq4wz-eth0" Dec 16 13:35:14.400064 containerd[1607]: time="2025-12-16T13:35:14.399856356Z" level=info msg="connecting to shim 1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1" address="unix:///run/containerd/s/1ac34014c24b4e05ee5cbb083bf232d1945318a8eab8777378f152de01ceaa2e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:35:14.414325 systemd[1]: Started cri-containerd-1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1.scope - libcontainer container 1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1. 
Dec 16 13:35:14.421799 systemd-resolved[1507]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 13:35:14.430122 containerd[1607]: time="2025-12-16T13:35:14.430068485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vq4wz,Uid:6f1d1d03-3f5c-4209-adf6-a2cec01d4b01,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d3fd2164991387a3b313686e8f89ff7398b1b3f09751dd274aa5fd1b1fdeff1\"" Dec 16 13:35:14.494262 containerd[1607]: time="2025-12-16T13:35:14.494222297Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:14.494856 containerd[1607]: time="2025-12-16T13:35:14.494835469Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:35:14.494967 containerd[1607]: time="2025-12-16T13:35:14.494912570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:35:14.495146 kubelet[2936]: E1216 13:35:14.495113 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:35:14.495200 kubelet[2936]: E1216 13:35:14.495168 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:35:14.495404 kubelet[2936]: E1216 13:35:14.495366 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5bgh6_calico-system(f5d82f91-8d34-4dd6-9053-b327d15a7af5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:14.495698 containerd[1607]: time="2025-12-16T13:35:14.495677964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:35:14.497283 kubelet[2936]: E1216 13:35:14.496908 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-5bgh6" podUID="f5d82f91-8d34-4dd6-9053-b327d15a7af5" Dec 16 13:35:14.504337 systemd-networkd[1506]: cali5d0615be7fa: Gained IPv6LL Dec 16 13:35:14.558151 kubelet[2936]: E1216 13:35:14.558114 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5bgh6" podUID="f5d82f91-8d34-4dd6-9053-b327d15a7af5" Dec 16 13:35:14.564714 kubelet[2936]: E1216 13:35:14.564611 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f94fcf66-9fr5g" podUID="d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf" Dec 16 13:35:14.824351 systemd-networkd[1506]: cali7828768357f: Gained IPv6LL Dec 16 13:35:14.876542 containerd[1607]: time="2025-12-16T13:35:14.876410569Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:14.876789 containerd[1607]: time="2025-12-16T13:35:14.876768213Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:35:14.876917 containerd[1607]: time="2025-12-16T13:35:14.876834445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:35:14.876983 kubelet[2936]: E1216 13:35:14.876939 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:35:14.877553 kubelet[2936]: E1216 13:35:14.876991 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:35:14.877553 kubelet[2936]: E1216 13:35:14.877101 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hllc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vq4wz_calico-system(6f1d1d03-3f5c-4209-adf6-a2cec01d4b01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:14.879490 containerd[1607]: time="2025-12-16T13:35:14.879471731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:35:15.223422 containerd[1607]: time="2025-12-16T13:35:15.223385022Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:15.223774 containerd[1607]: time="2025-12-16T13:35:15.223726748Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:35:15.223774 containerd[1607]: time="2025-12-16T13:35:15.223743535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:35:15.223878 kubelet[2936]: E1216 13:35:15.223852 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:35:15.223935 kubelet[2936]: E1216 13:35:15.223885 2936 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:35:15.224196 kubelet[2936]: E1216 13:35:15.223973 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hllc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vq4wz_calico-system(6f1d1d03-3f5c-4209-adf6-a2cec01d4b01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:15.225701 kubelet[2936]: E1216 13:35:15.225684 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-vq4wz" podUID="6f1d1d03-3f5c-4209-adf6-a2cec01d4b01" Dec 16 13:35:15.528531 systemd-networkd[1506]: cali0aec8cf6bbc: Gained IPv6LL Dec 16 13:35:15.566387 kubelet[2936]: E1216 13:35:15.566290 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f94fcf66-9fr5g" podUID="d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf" Dec 16 13:35:15.566516 kubelet[2936]: E1216 13:35:15.566465 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vq4wz" podUID="6f1d1d03-3f5c-4209-adf6-a2cec01d4b01" Dec 16 13:35:15.567105 kubelet[2936]: E1216 13:35:15.566602 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5bgh6" podUID="f5d82f91-8d34-4dd6-9053-b327d15a7af5" Dec 16 13:35:15.579378 kubelet[2936]: I1216 13:35:15.578295 2936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-h54lr" podStartSLOduration=38.578275931 podStartE2EDuration="38.578275931s" podCreationTimestamp="2025-12-16 13:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:35:14.618666024 +0000 UTC m=+44.397164125" watchObservedRunningTime="2025-12-16 13:35:15.578275931 +0000 UTC m=+45.356774041" Dec 16 13:35:15.751590 kubelet[2936]: I1216 13:35:15.751558 2936 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:35:16.360403 systemd-networkd[1506]: cali2d6f3b912b4: Gained IPv6LL Dec 16 13:35:20.306972 containerd[1607]: time="2025-12-16T13:35:20.306886632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:35:20.655628 containerd[1607]: time="2025-12-16T13:35:20.655447402Z" level=info msg="fetch failed after status: 
404 Not Found" host=ghcr.io Dec 16 13:35:20.656289 containerd[1607]: time="2025-12-16T13:35:20.656270146Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:35:20.656397 containerd[1607]: time="2025-12-16T13:35:20.656350701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:35:20.656693 kubelet[2936]: E1216 13:35:20.656520 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:35:20.656693 kubelet[2936]: E1216 13:35:20.656571 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:35:20.656693 kubelet[2936]: E1216 13:35:20.656664 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:961cb6d8db104cc6a58fd1d09c5a9cf2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mjfmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d87f9cb8-v4hv4_calico-system(8d8ff3b5-7370-40af-945d-8fef79b8d3a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:20.658404 containerd[1607]: time="2025-12-16T13:35:20.658381947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:35:20.999395 
containerd[1607]: time="2025-12-16T13:35:20.998845252Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:20.999810 containerd[1607]: time="2025-12-16T13:35:20.999716093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:35:20.999908 containerd[1607]: time="2025-12-16T13:35:20.999730524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:35:21.000012 kubelet[2936]: E1216 13:35:20.999989 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:35:21.000155 kubelet[2936]: E1216 13:35:21.000144 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:35:21.000288 kubelet[2936]: E1216 13:35:21.000268 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mjfmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod whisker-d87f9cb8-v4hv4_calico-system(8d8ff3b5-7370-40af-945d-8fef79b8d3a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:21.001802 kubelet[2936]: E1216 13:35:21.001780 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d87f9cb8-v4hv4" podUID="8d8ff3b5-7370-40af-945d-8fef79b8d3a6" Dec 16 13:35:24.307357 containerd[1607]: time="2025-12-16T13:35:24.307328466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:35:24.677424 containerd[1607]: time="2025-12-16T13:35:24.677390349Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:24.677932 containerd[1607]: time="2025-12-16T13:35:24.677843185Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:35:24.677932 containerd[1607]: time="2025-12-16T13:35:24.677893550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:35:24.678017 kubelet[2936]: E1216 13:35:24.677985 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:35:24.678254 kubelet[2936]: E1216 13:35:24.678021 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:35:24.678347 kubelet[2936]: E1216 13:35:24.678136 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n88np,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78f95cbfdd-lh872_calico-system(7900a761-f455-401e-a05b-5ba11ccd5975): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:24.679874 kubelet[2936]: E1216 13:35:24.679567 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f95cbfdd-lh872" podUID="7900a761-f455-401e-a05b-5ba11ccd5975" Dec 16 13:35:27.306375 containerd[1607]: time="2025-12-16T13:35:27.306075453Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:35:27.660779 containerd[1607]: time="2025-12-16T13:35:27.660736242Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:27.661155 containerd[1607]: time="2025-12-16T13:35:27.661124835Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:35:27.661835 containerd[1607]: time="2025-12-16T13:35:27.661177884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:35:27.661835 containerd[1607]: time="2025-12-16T13:35:27.661617245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:35:27.661903 kubelet[2936]: E1216 13:35:27.661293 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:35:27.661903 kubelet[2936]: E1216 13:35:27.661354 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:35:27.661903 kubelet[2936]: E1216 13:35:27.661742 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nctwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59f94fcf66-7qr5n_calico-apiserver(ec4d992b-26cc-4e71-9db7-9af961649e2b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:27.663718 kubelet[2936]: E1216 13:35:27.663692 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f94fcf66-7qr5n" podUID="ec4d992b-26cc-4e71-9db7-9af961649e2b" Dec 16 13:35:28.112267 containerd[1607]: time="2025-12-16T13:35:28.112142604Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:28.112662 containerd[1607]: time="2025-12-16T13:35:28.112596056Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:35:28.112738 containerd[1607]: time="2025-12-16T13:35:28.112667953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:35:28.112909 kubelet[2936]: E1216 13:35:28.112852 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:35:28.112909 kubelet[2936]: E1216 13:35:28.112892 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:35:28.113167 containerd[1607]: time="2025-12-16T13:35:28.113102274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:35:28.113368 kubelet[2936]: E1216 13:35:28.113260 
2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l7bhf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59f94fcf66-9fr5g_calico-apiserver(d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:28.114559 kubelet[2936]: E1216 13:35:28.114532 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f94fcf66-9fr5g" podUID="d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf" Dec 16 13:35:28.418239 containerd[1607]: time="2025-12-16T13:35:28.418168508Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:28.418737 containerd[1607]: time="2025-12-16T13:35:28.418705226Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: 
not found" Dec 16 13:35:28.418808 containerd[1607]: time="2025-12-16T13:35:28.418765837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:35:28.419271 kubelet[2936]: E1216 13:35:28.418896 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:35:28.419313 kubelet[2936]: E1216 13:35:28.419281 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:35:28.419451 kubelet[2936]: E1216 13:35:28.419374 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5bgh6_calico-system(f5d82f91-8d34-4dd6-9053-b327d15a7af5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:28.420517 kubelet[2936]: E1216 13:35:28.420501 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5bgh6" podUID="f5d82f91-8d34-4dd6-9053-b327d15a7af5" Dec 16 13:35:30.577007 containerd[1607]: time="2025-12-16T13:35:30.576916224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:35:30.909115 containerd[1607]: time="2025-12-16T13:35:30.909081798Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:30.909537 containerd[1607]: time="2025-12-16T13:35:30.909518169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:35:30.909615 containerd[1607]: time="2025-12-16T13:35:30.909588633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:35:30.909728 kubelet[2936]: E1216 13:35:30.909694 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:35:30.910022 kubelet[2936]: E1216 13:35:30.909736 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:35:30.910022 kubelet[2936]: E1216 13:35:30.909830 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hllc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vq4wz_calico-system(6f1d1d03-3f5c-4209-adf6-a2cec01d4b01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:30.912549 containerd[1607]: time="2025-12-16T13:35:30.912179181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:35:31.250419 containerd[1607]: time="2025-12-16T13:35:31.250328621Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:31.250774 containerd[1607]: time="2025-12-16T13:35:31.250740231Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:35:31.250834 containerd[1607]: time="2025-12-16T13:35:31.250803957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:35:31.251106 kubelet[2936]: E1216 13:35:31.251067 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:35:31.251196 kubelet[2936]: E1216 13:35:31.251107 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:35:31.251707 kubelet[2936]: E1216 13:35:31.251208 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hllc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vq4wz_calico-system(6f1d1d03-3f5c-4209-adf6-a2cec01d4b01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:31.252975 kubelet[2936]: E1216 13:35:31.252947 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vq4wz" podUID="6f1d1d03-3f5c-4209-adf6-a2cec01d4b01" Dec 16 13:35:35.307245 kubelet[2936]: E1216 13:35:35.306877 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d87f9cb8-v4hv4" podUID="8d8ff3b5-7370-40af-945d-8fef79b8d3a6" Dec 16 13:35:37.305350 kubelet[2936]: E1216 13:35:37.305145 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f95cbfdd-lh872" podUID="7900a761-f455-401e-a05b-5ba11ccd5975" Dec 16 13:35:39.305419 kubelet[2936]: E1216 13:35:39.305368 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5bgh6" podUID="f5d82f91-8d34-4dd6-9053-b327d15a7af5" Dec 16 13:35:40.121343 systemd[1]: Started sshd@7-139.178.70.100:22-139.178.89.65:55418.service - OpenSSH per-connection server daemon (139.178.89.65:55418). Dec 16 13:35:40.211844 sshd[4930]: Accepted publickey for core from 139.178.89.65 port 55418 ssh2: RSA SHA256:ptIxrQTrYo0+chtuEVmg9+TwAphmKe0c6qeNmkN5+wE Dec 16 13:35:40.213325 sshd-session[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:35:40.217442 systemd-logind[1583]: New session 10 of user core. Dec 16 13:35:40.225576 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 13:35:40.684603 sshd[4943]: Connection closed by 139.178.89.65 port 55418 Dec 16 13:35:40.684945 sshd-session[4930]: pam_unix(sshd:session): session closed for user core Dec 16 13:35:40.688004 systemd[1]: sshd@7-139.178.70.100:22-139.178.89.65:55418.service: Deactivated successfully. Dec 16 13:35:40.690597 systemd[1]: session-10.scope: Deactivated successfully. 
Dec 16 13:35:40.691199 systemd-logind[1583]: Session 10 logged out. Waiting for processes to exit. Dec 16 13:35:40.692147 systemd-logind[1583]: Removed session 10. Dec 16 13:35:41.306286 kubelet[2936]: E1216 13:35:41.306257 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f94fcf66-7qr5n" podUID="ec4d992b-26cc-4e71-9db7-9af961649e2b" Dec 16 13:35:42.307266 kubelet[2936]: E1216 13:35:42.306520 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f94fcf66-9fr5g" podUID="d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf" Dec 16 13:35:45.305697 kubelet[2936]: E1216 13:35:45.305646 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vq4wz" podUID="6f1d1d03-3f5c-4209-adf6-a2cec01d4b01" Dec 16 13:35:45.697323 systemd[1]: Started sshd@8-139.178.70.100:22-139.178.89.65:42514.service - OpenSSH per-connection server daemon (139.178.89.65:42514). Dec 16 13:35:45.754797 sshd[4962]: Accepted publickey for core from 139.178.89.65 port 42514 ssh2: RSA SHA256:ptIxrQTrYo0+chtuEVmg9+TwAphmKe0c6qeNmkN5+wE Dec 16 13:35:45.755758 sshd-session[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:35:45.758353 systemd-logind[1583]: New session 11 of user core. Dec 16 13:35:45.765348 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 13:35:45.932298 sshd[4965]: Connection closed by 139.178.89.65 port 42514 Dec 16 13:35:45.932615 sshd-session[4962]: pam_unix(sshd:session): session closed for user core Dec 16 13:35:45.935707 systemd[1]: sshd@8-139.178.70.100:22-139.178.89.65:42514.service: Deactivated successfully. Dec 16 13:35:45.937182 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 13:35:45.938029 systemd-logind[1583]: Session 11 logged out. 
Waiting for processes to exit. Dec 16 13:35:45.939156 systemd-logind[1583]: Removed session 11. Dec 16 13:35:49.306578 containerd[1607]: time="2025-12-16T13:35:49.306376525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:35:49.625788 containerd[1607]: time="2025-12-16T13:35:49.625561621Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:49.629252 containerd[1607]: time="2025-12-16T13:35:49.629201686Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:35:49.629325 containerd[1607]: time="2025-12-16T13:35:49.629284865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:35:49.629441 kubelet[2936]: E1216 13:35:49.629397 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:35:49.629628 kubelet[2936]: E1216 13:35:49.629449 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:35:49.630131 containerd[1607]: time="2025-12-16T13:35:49.629830650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:35:49.630166 kubelet[2936]: E1216 13:35:49.629760 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n88np,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78f95cbfdd-lh872_calico-system(7900a761-f455-401e-a05b-5ba11ccd5975): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:49.630994 kubelet[2936]: E1216 13:35:49.630950 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f95cbfdd-lh872" podUID="7900a761-f455-401e-a05b-5ba11ccd5975" Dec 16 13:35:49.980109 containerd[1607]: time="2025-12-16T13:35:49.980075841Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:49.982793 containerd[1607]: time="2025-12-16T13:35:49.982754459Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:35:49.982926 containerd[1607]: time="2025-12-16T13:35:49.982813420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:35:49.982963 kubelet[2936]: E1216 13:35:49.982907 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:35:49.983040 kubelet[2936]: E1216 13:35:49.982957 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:35:49.983040 kubelet[2936]: E1216 13:35:49.983044 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:961cb6d8db104cc6a58fd1d09c5a9cf2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mjfmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d87f9cb8-v4hv4_calico-system(8d8ff3b5-7370-40af-945d-8fef79b8d3a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:49.984993 containerd[1607]: time="2025-12-16T13:35:49.984899736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:35:50.361717 containerd[1607]: time="2025-12-16T13:35:50.361637996Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:50.368071 containerd[1607]: time="2025-12-16T13:35:50.367976525Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:35:50.368071 containerd[1607]: time="2025-12-16T13:35:50.367997157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:35:50.368321 kubelet[2936]: E1216 13:35:50.368284 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:35:50.368418 kubelet[2936]: E1216 13:35:50.368405 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:35:50.368688 kubelet[2936]: E1216 13:35:50.368650 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mjfmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d87f9cb8-v4hv4_calico-system(8d8ff3b5-7370-40af-945d-8fef79b8d3a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:50.368917 containerd[1607]: time="2025-12-16T13:35:50.368862258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:35:50.370217 kubelet[2936]: E1216 13:35:50.370191 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d87f9cb8-v4hv4" 
podUID="8d8ff3b5-7370-40af-945d-8fef79b8d3a6" Dec 16 13:35:50.718479 containerd[1607]: time="2025-12-16T13:35:50.718426775Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:50.718857 containerd[1607]: time="2025-12-16T13:35:50.718818980Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:35:50.719271 containerd[1607]: time="2025-12-16T13:35:50.718884486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:35:50.719322 kubelet[2936]: E1216 13:35:50.718979 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:35:50.719322 kubelet[2936]: E1216 13:35:50.719011 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:35:50.719322 kubelet[2936]: E1216 13:35:50.719114 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5bgh6_calico-system(f5d82f91-8d34-4dd6-9053-b327d15a7af5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:50.721371 kubelet[2936]: E1216 13:35:50.720785 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5bgh6" podUID="f5d82f91-8d34-4dd6-9053-b327d15a7af5" Dec 16 13:35:50.945846 systemd[1]: Started sshd@9-139.178.70.100:22-139.178.89.65:34272.service - OpenSSH per-connection server daemon (139.178.89.65:34272). Dec 16 13:35:51.007081 sshd[5006]: Accepted publickey for core from 139.178.89.65 port 34272 ssh2: RSA SHA256:ptIxrQTrYo0+chtuEVmg9+TwAphmKe0c6qeNmkN5+wE Dec 16 13:35:51.008066 sshd-session[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:35:51.011125 systemd-logind[1583]: New session 12 of user core. Dec 16 13:35:51.017312 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 13:35:51.130769 sshd[5009]: Connection closed by 139.178.89.65 port 34272 Dec 16 13:35:51.131107 sshd-session[5006]: pam_unix(sshd:session): session closed for user core Dec 16 13:35:51.140693 systemd[1]: sshd@9-139.178.70.100:22-139.178.89.65:34272.service: Deactivated successfully. Dec 16 13:35:51.142937 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 13:35:51.144925 systemd-logind[1583]: Session 12 logged out. Waiting for processes to exit. Dec 16 13:35:51.147784 systemd-logind[1583]: Removed session 12. Dec 16 13:35:51.150114 systemd[1]: Started sshd@10-139.178.70.100:22-139.178.89.65:34278.service - OpenSSH per-connection server daemon (139.178.89.65:34278). Dec 16 13:35:51.211243 sshd[5022]: Accepted publickey for core from 139.178.89.65 port 34278 ssh2: RSA SHA256:ptIxrQTrYo0+chtuEVmg9+TwAphmKe0c6qeNmkN5+wE Dec 16 13:35:51.211338 sshd-session[5022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:35:51.217550 systemd-logind[1583]: New session 13 of user core. Dec 16 13:35:51.221312 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 16 13:35:51.384944 sshd[5025]: Connection closed by 139.178.89.65 port 34278 Dec 16 13:35:51.386021 sshd-session[5022]: pam_unix(sshd:session): session closed for user core Dec 16 13:35:51.399111 systemd[1]: Started sshd@11-139.178.70.100:22-139.178.89.65:34280.service - OpenSSH per-connection server daemon (139.178.89.65:34280). Dec 16 13:35:51.399521 systemd[1]: sshd@10-139.178.70.100:22-139.178.89.65:34278.service: Deactivated successfully. Dec 16 13:35:51.403115 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 13:35:51.407861 systemd-logind[1583]: Session 13 logged out. Waiting for processes to exit. Dec 16 13:35:51.412364 systemd-logind[1583]: Removed session 13. Dec 16 13:35:51.460436 sshd[5031]: Accepted publickey for core from 139.178.89.65 port 34280 ssh2: RSA SHA256:ptIxrQTrYo0+chtuEVmg9+TwAphmKe0c6qeNmkN5+wE Dec 16 13:35:51.461265 sshd-session[5031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:35:51.464559 systemd-logind[1583]: New session 14 of user core. Dec 16 13:35:51.471417 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 13:35:51.574885 sshd[5037]: Connection closed by 139.178.89.65 port 34280 Dec 16 13:35:51.575203 sshd-session[5031]: pam_unix(sshd:session): session closed for user core Dec 16 13:35:51.577170 systemd[1]: sshd@11-139.178.70.100:22-139.178.89.65:34280.service: Deactivated successfully. Dec 16 13:35:51.578639 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 13:35:51.579360 systemd-logind[1583]: Session 14 logged out. Waiting for processes to exit. Dec 16 13:35:51.580562 systemd-logind[1583]: Removed session 14. Dec 16 13:35:55.306428 containerd[1607]: time="2025-12-16T13:35:55.306177543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:35:55.667362 containerd[1607]: time="2025-12-16T13:35:55.667326122Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:55.673656 containerd[1607]: time="2025-12-16T13:35:55.673581044Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:35:55.673656 containerd[1607]: time="2025-12-16T13:35:55.673635797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:35:55.673953 kubelet[2936]: E1216 13:35:55.673842 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:35:55.673953 kubelet[2936]: E1216 13:35:55.673903 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:35:55.674354 kubelet[2936]: E1216 13:35:55.674296 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nctwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59f94fcf66-7qr5n_calico-apiserver(ec4d992b-26cc-4e71-9db7-9af961649e2b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:55.675782 kubelet[2936]: E1216 13:35:55.675755 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f94fcf66-7qr5n" podUID="ec4d992b-26cc-4e71-9db7-9af961649e2b" Dec 16 13:35:56.305845 containerd[1607]: time="2025-12-16T13:35:56.305815832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:35:56.584521 systemd[1]: Started sshd@12-139.178.70.100:22-139.178.89.65:34282.service - OpenSSH per-connection server daemon (139.178.89.65:34282). 
Dec 16 13:35:56.620843 sshd[5054]: Accepted publickey for core from 139.178.89.65 port 34282 ssh2: RSA SHA256:ptIxrQTrYo0+chtuEVmg9+TwAphmKe0c6qeNmkN5+wE Dec 16 13:35:56.621847 sshd-session[5054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:35:56.629015 systemd-logind[1583]: New session 15 of user core. Dec 16 13:35:56.640346 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 13:35:56.657640 containerd[1607]: time="2025-12-16T13:35:56.657608636Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:56.657959 containerd[1607]: time="2025-12-16T13:35:56.657935834Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:35:56.657997 containerd[1607]: time="2025-12-16T13:35:56.657983630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:35:56.658151 kubelet[2936]: E1216 13:35:56.658122 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:35:56.658235 kubelet[2936]: E1216 13:35:56.658211 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:35:56.658383 kubelet[2936]: E1216 13:35:56.658356 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l7bhf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59f94fcf66-9fr5g_calico-apiserver(d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:56.660390 kubelet[2936]: E1216 13:35:56.660357 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f94fcf66-9fr5g" podUID="d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf" Dec 16 13:35:56.749213 sshd[5057]: Connection closed by 139.178.89.65 port 34282 Dec 16 13:35:56.750196 sshd-session[5054]: pam_unix(sshd:session): session closed for user core Dec 16 13:35:56.753900 systemd[1]: sshd@12-139.178.70.100:22-139.178.89.65:34282.service: Deactivated successfully. Dec 16 13:35:56.755706 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 13:35:56.756845 systemd-logind[1583]: Session 15 logged out. Waiting for processes to exit. Dec 16 13:35:56.757451 systemd-logind[1583]: Removed session 15. 
Dec 16 13:35:57.306492 containerd[1607]: time="2025-12-16T13:35:57.306390273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:35:57.682374 containerd[1607]: time="2025-12-16T13:35:57.682343854Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:57.682952 containerd[1607]: time="2025-12-16T13:35:57.682887771Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:35:57.682952 containerd[1607]: time="2025-12-16T13:35:57.682934472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:35:57.683048 kubelet[2936]: E1216 13:35:57.683020 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:35:57.683309 kubelet[2936]: E1216 13:35:57.683056 2936 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:35:57.683309 kubelet[2936]: E1216 13:35:57.683141 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hllc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vq4wz_calico-system(6f1d1d03-3f5c-4209-adf6-a2cec01d4b01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:57.686604 containerd[1607]: time="2025-12-16T13:35:57.686397194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:35:58.018973 containerd[1607]: time="2025-12-16T13:35:58.018897642Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:35:58.019502 containerd[1607]: time="2025-12-16T13:35:58.019461486Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:35:58.019616 containerd[1607]: time="2025-12-16T13:35:58.019561877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:35:58.019806 kubelet[2936]: E1216 13:35:58.019770 2936 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:35:58.019944 kubelet[2936]: E1216 13:35:58.019810 2936 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:35:58.020266 kubelet[2936]: E1216 13:35:58.020204 2936 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hllc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vq4wz_calico-system(6f1d1d03-3f5c-4209-adf6-a2cec01d4b01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:35:58.021442 kubelet[2936]: E1216 13:35:58.021367 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-vq4wz" podUID="6f1d1d03-3f5c-4209-adf6-a2cec01d4b01" Dec 16 13:36:01.759974 systemd[1]: Started sshd@13-139.178.70.100:22-139.178.89.65:50114.service - OpenSSH per-connection server daemon (139.178.89.65:50114). Dec 16 13:36:01.828678 sshd[5070]: Accepted publickey for core from 139.178.89.65 port 50114 ssh2: RSA SHA256:ptIxrQTrYo0+chtuEVmg9+TwAphmKe0c6qeNmkN5+wE Dec 16 13:36:01.829610 sshd-session[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:36:01.833498 systemd-logind[1583]: New session 16 of user core. Dec 16 13:36:01.841348 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 13:36:01.953863 sshd[5073]: Connection closed by 139.178.89.65 port 50114 Dec 16 13:36:01.954433 sshd-session[5070]: pam_unix(sshd:session): session closed for user core Dec 16 13:36:01.957456 systemd[1]: sshd@13-139.178.70.100:22-139.178.89.65:50114.service: Deactivated successfully. Dec 16 13:36:01.959221 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 13:36:01.960462 systemd-logind[1583]: Session 16 logged out. Waiting for processes to exit. Dec 16 13:36:01.961364 systemd-logind[1583]: Removed session 16. Dec 16 13:36:03.307515 kubelet[2936]: E1216 13:36:03.307325 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d87f9cb8-v4hv4" podUID="8d8ff3b5-7370-40af-945d-8fef79b8d3a6" Dec 16 13:36:04.306638 kubelet[2936]: E1216 13:36:04.306607 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f95cbfdd-lh872" podUID="7900a761-f455-401e-a05b-5ba11ccd5975" Dec 16 13:36:05.306275 kubelet[2936]: E1216 13:36:05.305975 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5bgh6" 
podUID="f5d82f91-8d34-4dd6-9053-b327d15a7af5" Dec 16 13:36:06.963802 systemd[1]: Started sshd@14-139.178.70.100:22-139.178.89.65:50120.service - OpenSSH per-connection server daemon (139.178.89.65:50120). Dec 16 13:36:07.000903 sshd[5086]: Accepted publickey for core from 139.178.89.65 port 50120 ssh2: RSA SHA256:ptIxrQTrYo0+chtuEVmg9+TwAphmKe0c6qeNmkN5+wE Dec 16 13:36:07.001432 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:36:07.005057 systemd-logind[1583]: New session 17 of user core. Dec 16 13:36:07.010418 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 13:36:07.136367 sshd[5089]: Connection closed by 139.178.89.65 port 50120 Dec 16 13:36:07.138308 sshd-session[5086]: pam_unix(sshd:session): session closed for user core Dec 16 13:36:07.143626 systemd[1]: sshd@14-139.178.70.100:22-139.178.89.65:50120.service: Deactivated successfully. Dec 16 13:36:07.145316 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 13:36:07.146345 systemd-logind[1583]: Session 17 logged out. Waiting for processes to exit. Dec 16 13:36:07.148398 systemd[1]: Started sshd@15-139.178.70.100:22-139.178.89.65:50130.service - OpenSSH per-connection server daemon (139.178.89.65:50130). Dec 16 13:36:07.149104 systemd-logind[1583]: Removed session 17. Dec 16 13:36:07.184881 sshd[5101]: Accepted publickey for core from 139.178.89.65 port 50130 ssh2: RSA SHA256:ptIxrQTrYo0+chtuEVmg9+TwAphmKe0c6qeNmkN5+wE Dec 16 13:36:07.185908 sshd-session[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:36:07.189431 systemd-logind[1583]: New session 18 of user core. Dec 16 13:36:07.194316 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 13:36:07.304899 kubelet[2936]: E1216 13:36:07.304831 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f94fcf66-7qr5n" podUID="ec4d992b-26cc-4e71-9db7-9af961649e2b" Dec 16 13:36:07.550677 sshd[5104]: Connection closed by 139.178.89.65 port 50130 Dec 16 13:36:07.553262 sshd-session[5101]: pam_unix(sshd:session): session closed for user core Dec 16 13:36:07.561187 systemd[1]: sshd@15-139.178.70.100:22-139.178.89.65:50130.service: Deactivated successfully. Dec 16 13:36:07.561316 systemd-logind[1583]: Session 18 logged out. Waiting for processes to exit. Dec 16 13:36:07.562676 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 13:36:07.564537 systemd-logind[1583]: Removed session 18. Dec 16 13:36:07.565511 systemd[1]: Started sshd@16-139.178.70.100:22-139.178.89.65:50136.service - OpenSSH per-connection server daemon (139.178.89.65:50136). Dec 16 13:36:07.621986 sshd[5113]: Accepted publickey for core from 139.178.89.65 port 50136 ssh2: RSA SHA256:ptIxrQTrYo0+chtuEVmg9+TwAphmKe0c6qeNmkN5+wE Dec 16 13:36:07.622690 sshd-session[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:36:07.625482 systemd-logind[1583]: New session 19 of user core. Dec 16 13:36:07.635307 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 13:36:08.254312 sshd[5117]: Connection closed by 139.178.89.65 port 50136 Dec 16 13:36:08.254753 sshd-session[5113]: pam_unix(sshd:session): session closed for user core Dec 16 13:36:08.265172 systemd[1]: Started sshd@17-139.178.70.100:22-139.178.89.65:50148.service - OpenSSH per-connection server daemon (139.178.89.65:50148). Dec 16 13:36:08.276881 systemd[1]: sshd@16-139.178.70.100:22-139.178.89.65:50136.service: Deactivated successfully. Dec 16 13:36:08.277567 systemd-logind[1583]: Session 19 logged out. Waiting for processes to exit. Dec 16 13:36:08.280153 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 13:36:08.282292 systemd-logind[1583]: Removed session 19. Dec 16 13:36:08.331312 sshd[5135]: Accepted publickey for core from 139.178.89.65 port 50148 ssh2: RSA SHA256:ptIxrQTrYo0+chtuEVmg9+TwAphmKe0c6qeNmkN5+wE Dec 16 13:36:08.332340 sshd-session[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:36:08.338082 systemd-logind[1583]: New session 20 of user core. Dec 16 13:36:08.342330 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 13:36:08.561041 sshd[5142]: Connection closed by 139.178.89.65 port 50148 Dec 16 13:36:08.562244 sshd-session[5135]: pam_unix(sshd:session): session closed for user core Dec 16 13:36:08.568461 systemd[1]: sshd@17-139.178.70.100:22-139.178.89.65:50148.service: Deactivated successfully. Dec 16 13:36:08.569965 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 13:36:08.571442 systemd-logind[1583]: Session 20 logged out. Waiting for processes to exit. Dec 16 13:36:08.574193 systemd[1]: Started sshd@18-139.178.70.100:22-139.178.89.65:50164.service - OpenSSH per-connection server daemon (139.178.89.65:50164). Dec 16 13:36:08.576014 systemd-logind[1583]: Removed session 20. Dec 16 13:36:08.614964 sshd[5152]: Accepted publickey for core from 139.178.89.65 port 50164 ssh2: RSA SHA256:ptIxrQTrYo0+chtuEVmg9+TwAphmKe0c6qeNmkN5+wE Dec 16 13:36:08.616079 sshd-session[5152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:36:08.621173 systemd-logind[1583]: New session 21 of user core. Dec 16 13:36:08.626335 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 13:36:08.761737 sshd[5155]: Connection closed by 139.178.89.65 port 50164 Dec 16 13:36:08.762280 sshd-session[5152]: pam_unix(sshd:session): session closed for user core Dec 16 13:36:08.764636 systemd[1]: sshd@18-139.178.70.100:22-139.178.89.65:50164.service: Deactivated successfully. Dec 16 13:36:08.766659 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 13:36:08.769447 systemd-logind[1583]: Session 21 logged out. Waiting for processes to exit. Dec 16 13:36:08.770579 systemd-logind[1583]: Removed session 21. 
Dec 16 13:36:11.306336 kubelet[2936]: E1216 13:36:11.306288 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vq4wz" podUID="6f1d1d03-3f5c-4209-adf6-a2cec01d4b01" Dec 16 13:36:12.306210 kubelet[2936]: E1216 13:36:12.305930 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f94fcf66-9fr5g" podUID="d4fd9b6b-9d45-4c37-9fcc-05498ebe1ddf" Dec 16 13:36:13.772585 systemd[1]: Started sshd@19-139.178.70.100:22-139.178.89.65:32910.service - OpenSSH per-connection server daemon (139.178.89.65:32910). Dec 16 13:36:13.824849 sshd[5170]: Accepted publickey for core from 139.178.89.65 port 32910 ssh2: RSA SHA256:ptIxrQTrYo0+chtuEVmg9+TwAphmKe0c6qeNmkN5+wE Dec 16 13:36:13.825671 sshd-session[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:36:13.829953 systemd-logind[1583]: New session 22 of user core. Dec 16 13:36:13.833318 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 13:36:13.945100 sshd[5173]: Connection closed by 139.178.89.65 port 32910 Dec 16 13:36:13.944667 sshd-session[5170]: pam_unix(sshd:session): session closed for user core Dec 16 13:36:13.946724 systemd[1]: sshd@19-139.178.70.100:22-139.178.89.65:32910.service: Deactivated successfully. Dec 16 13:36:13.948737 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 13:36:13.950109 systemd-logind[1583]: Session 22 logged out. Waiting for processes to exit. Dec 16 13:36:13.951832 systemd-logind[1583]: Removed session 22. 
Dec 16 13:36:16.305407 kubelet[2936]: E1216 13:36:16.305368 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5bgh6" podUID="f5d82f91-8d34-4dd6-9053-b327d15a7af5" Dec 16 13:36:16.306038 kubelet[2936]: E1216 13:36:16.305582 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f95cbfdd-lh872" podUID="7900a761-f455-401e-a05b-5ba11ccd5975" Dec 16 13:36:17.306054 kubelet[2936]: E1216 13:36:17.306013 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d87f9cb8-v4hv4" podUID="8d8ff3b5-7370-40af-945d-8fef79b8d3a6" Dec 16 13:36:18.957073 systemd[1]: Started sshd@20-139.178.70.100:22-139.178.89.65:32918.service - OpenSSH per-connection server daemon (139.178.89.65:32918). Dec 16 13:36:19.003082 sshd[5210]: Accepted publickey for core from 139.178.89.65 port 32918 ssh2: RSA SHA256:ptIxrQTrYo0+chtuEVmg9+TwAphmKe0c6qeNmkN5+wE Dec 16 13:36:19.004175 sshd-session[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:36:19.007748 systemd-logind[1583]: New session 23 of user core. Dec 16 13:36:19.011358 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 13:36:19.107586 sshd[5213]: Connection closed by 139.178.89.65 port 32918 Dec 16 13:36:19.108347 sshd-session[5210]: pam_unix(sshd:session): session closed for user core Dec 16 13:36:19.110580 systemd-logind[1583]: Session 23 logged out. Waiting for processes to exit. Dec 16 13:36:19.111390 systemd[1]: sshd@20-139.178.70.100:22-139.178.89.65:32918.service: Deactivated successfully. Dec 16 13:36:19.112735 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 13:36:19.114260 systemd-logind[1583]: Removed session 23. 
Dec 16 13:36:20.306331 kubelet[2936]: E1216 13:36:20.306085 2936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f94fcf66-7qr5n" podUID="ec4d992b-26cc-4e71-9db7-9af961649e2b"