Oct 27 08:27:41.621138 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Oct 27 06:24:35 -00 2025 Oct 27 08:27:41.621156 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=e6ac205aca0358d0b739fe2cba6f8244850dbdc9027fd8e7442161fce065515e Oct 27 08:27:41.621163 kernel: Disabled fast string operations Oct 27 08:27:41.621167 kernel: BIOS-provided physical RAM map: Oct 27 08:27:41.621171 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Oct 27 08:27:41.621176 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Oct 27 08:27:41.621182 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Oct 27 08:27:41.621187 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Oct 27 08:27:41.621191 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Oct 27 08:27:41.621196 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Oct 27 08:27:41.621200 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Oct 27 08:27:41.621205 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Oct 27 08:27:41.621209 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Oct 27 08:27:41.621214 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Oct 27 08:27:41.621221 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Oct 27 08:27:41.621226 kernel: NX (Execute Disable) protection: active Oct 27 08:27:41.621231 kernel: APIC: Static calls initialized Oct 27 08:27:41.621236 kernel: SMBIOS 2.7 present. Oct 27 08:27:41.621241 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Oct 27 08:27:41.621246 kernel: DMI: Memory slots populated: 1/128 Oct 27 08:27:41.621507 kernel: vmware: hypercall mode: 0x00 Oct 27 08:27:41.621514 kernel: Hypervisor detected: VMware Oct 27 08:27:41.621519 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Oct 27 08:27:41.621525 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Oct 27 08:27:41.621530 kernel: vmware: using clock offset of 3335491517 ns Oct 27 08:27:41.621535 kernel: tsc: Detected 3408.000 MHz processor Oct 27 08:27:41.621541 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 27 08:27:41.621547 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 27 08:27:41.621552 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Oct 27 08:27:41.621560 kernel: total RAM covered: 3072M Oct 27 08:27:41.621565 kernel: Found optimal setting for mtrr clean up Oct 27 08:27:41.621572 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Oct 27 08:27:41.621577 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Oct 27 08:27:41.621582 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 27 08:27:41.621588 kernel: Using GB pages for direct mapping Oct 27 08:27:41.621593 kernel: ACPI: Early table checksum verification disabled Oct 27 08:27:41.621598 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Oct 27 08:27:41.621605 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Oct 27 08:27:41.621611 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Oct 27 08:27:41.621616 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Oct 27 08:27:41.621624 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Oct 27 08:27:41.621630 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Oct 27 08:27:41.621636 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Oct 27 08:27:41.621642 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Oct 27 08:27:41.621648 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Oct 27 08:27:41.621654 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Oct 27 08:27:41.621660 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Oct 27 08:27:41.621666 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Oct 27 08:27:41.621673 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Oct 27 08:27:41.621679 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Oct 27 08:27:41.621684 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Oct 27 08:27:41.621690 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Oct 27 08:27:41.621695 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Oct 27 08:27:41.621701 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Oct 27 08:27:41.621706 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Oct 27 08:27:41.621712 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Oct 27 08:27:41.621719 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Oct 27 08:27:41.621724 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Oct 27 08:27:41.621730 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Oct 27 08:27:41.621735 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Oct 27 08:27:41.621741 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Oct 27 08:27:41.621747 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff] Oct 27 08:27:41.621752 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff] Oct 27 08:27:41.621759 kernel: Zone ranges: Oct 27 08:27:41.621765 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 27 08:27:41.621770 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Oct 27 08:27:41.621776 kernel: Normal empty Oct 27 08:27:41.621782 kernel: Device empty Oct 27 08:27:41.621787 kernel: Movable zone start for each node Oct 27 08:27:41.621793 kernel: Early memory node ranges Oct 27 08:27:41.621798 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Oct 27 08:27:41.621805 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Oct 27 08:27:41.621811 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Oct 27 08:27:41.621816 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Oct 27 08:27:41.621822 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 27 08:27:41.621828 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Oct 27 08:27:41.621833 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Oct 27 08:27:41.621839 kernel: ACPI: PM-Timer IO Port: 0x1008 Oct 27 08:27:41.621844 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Oct 27 08:27:41.621851 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Oct 27 08:27:41.621857 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Oct 27 08:27:41.621862 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Oct 27 08:27:41.621867 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Oct 27 08:27:41.621873 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Oct 27 08:27:41.621878 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge 
lint[0x1]) Oct 27 08:27:41.621884 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Oct 27 08:27:41.621889 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Oct 27 08:27:41.621896 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Oct 27 08:27:41.621901 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Oct 27 08:27:41.621906 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Oct 27 08:27:41.621912 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Oct 27 08:27:41.621917 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Oct 27 08:27:41.621923 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Oct 27 08:27:41.621928 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Oct 27 08:27:41.621934 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Oct 27 08:27:41.621940 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Oct 27 08:27:41.621946 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Oct 27 08:27:41.621951 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Oct 27 08:27:41.621957 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Oct 27 08:27:41.621962 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Oct 27 08:27:41.621967 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Oct 27 08:27:41.621974 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Oct 27 08:27:41.621979 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Oct 27 08:27:41.621985 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Oct 27 08:27:41.621991 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Oct 27 08:27:41.621996 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Oct 27 08:27:41.622002 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Oct 27 08:27:41.622007 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Oct 27 08:27:41.622013 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Oct 27 08:27:41.622018 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Oct 27 08:27:41.622024 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Oct 27 08:27:41.622030 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Oct 27 08:27:41.622035 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Oct 27 08:27:41.622041 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Oct 27 08:27:41.622046 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Oct 27 08:27:41.622052 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Oct 27 08:27:41.622057 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Oct 27 08:27:41.622063 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Oct 27 08:27:41.622073 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Oct 27 08:27:41.622078 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Oct 27 08:27:41.622084 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Oct 27 08:27:41.622090 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Oct 27 08:27:41.622097 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Oct 27 08:27:41.622102 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Oct 27 08:27:41.622108 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Oct 27 08:27:41.622114 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Oct 27 08:27:41.622121 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Oct 27 08:27:41.622127 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x31] high edge lint[0x1]) Oct 27 08:27:41.622133 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Oct 27 08:27:41.622138 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Oct 27 08:27:41.622144 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Oct 27 08:27:41.622150 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Oct 27 08:27:41.622156 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Oct 27 08:27:41.622162 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Oct 27 08:27:41.622168 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Oct 27 08:27:41.622175 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Oct 27 08:27:41.622180 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Oct 27 08:27:41.622186 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Oct 27 08:27:41.622192 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Oct 27 08:27:41.622198 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Oct 27 08:27:41.622204 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Oct 27 08:27:41.622210 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Oct 27 08:27:41.622217 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Oct 27 08:27:41.622223 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Oct 27 08:27:41.622228 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Oct 27 08:27:41.622234 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Oct 27 08:27:41.622240 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Oct 27 08:27:41.622246 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Oct 27 08:27:41.622251 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Oct 27 08:27:41.622257 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Oct 27 08:27:41.622263 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Oct 27 08:27:41.622270 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Oct 27 08:27:41.622276 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Oct 27 08:27:41.622281 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Oct 27 08:27:41.622287 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Oct 27 08:27:41.622293 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Oct 27 08:27:41.622299 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Oct 27 08:27:41.622305 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Oct 27 08:27:41.622311 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Oct 27 08:27:41.622318 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Oct 27 08:27:41.622324 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Oct 27 08:27:41.622330 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Oct 27 08:27:41.622335 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Oct 27 08:27:41.622341 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Oct 27 08:27:41.622347 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Oct 27 08:27:41.622353 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Oct 27 08:27:41.622358 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Oct 27 08:27:41.622365 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Oct 27 08:27:41.622372 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Oct 27 08:27:41.622377 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Oct 27 08:27:41.622383 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Oct 27 08:27:41.622389 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Oct 27 08:27:41.622395 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Oct 27 08:27:41.622401 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Oct 27 08:27:41.622406 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Oct 27 08:27:41.622413 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Oct 27 08:27:41.622419 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Oct 27 08:27:41.622425 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Oct 27 08:27:41.622431 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Oct 27 08:27:41.622466 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Oct 27 08:27:41.622472 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Oct 27 08:27:41.622478 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Oct 27 08:27:41.622484 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Oct 27 08:27:41.622491 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Oct 27 08:27:41.622497 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Oct 27 08:27:41.622503 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Oct 27 08:27:41.622509 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Oct 27 08:27:41.622515 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Oct 27 08:27:41.622521 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Oct 27 08:27:41.622527 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Oct 27 08:27:41.622534 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Oct 27 08:27:41.622539 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Oct 27 08:27:41.622545 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Oct 27 08:27:41.622551 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Oct 27 08:27:41.622557 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Oct 27 08:27:41.622563 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Oct 27 08:27:41.622568 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Oct 27 08:27:41.622574 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Oct 27 08:27:41.622580 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Oct 27 08:27:41.622587 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Oct 27 08:27:41.622592 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Oct 27 08:27:41.622598 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Oct 27 08:27:41.622604 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Oct 27 08:27:41.622610 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Oct 27 08:27:41.622615 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Oct 27 08:27:41.622621 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Oct 27 08:27:41.622627 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Oct 27 08:27:41.622634 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Oct 27 08:27:41.622640 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 27 08:27:41.622646 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Oct 27 08:27:41.622652 kernel: TSC deadline timer available Oct 27 08:27:41.622658 kernel: CPU topo: Max. logical packages: 128 Oct 27 08:27:41.622664 kernel: CPU topo: Max. logical dies: 128 Oct 27 08:27:41.622670 kernel: CPU topo: Max. 
dies per package: 1 Oct 27 08:27:41.622677 kernel: CPU topo: Max. threads per core: 1 Oct 27 08:27:41.622683 kernel: CPU topo: Num. cores per package: 1 Oct 27 08:27:41.622689 kernel: CPU topo: Num. threads per package: 1 Oct 27 08:27:41.622695 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs Oct 27 08:27:41.622700 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Oct 27 08:27:41.622706 kernel: Booting paravirtualized kernel on VMware hypervisor Oct 27 08:27:41.622712 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 27 08:27:41.622718 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Oct 27 08:27:41.622725 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Oct 27 08:27:41.622731 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Oct 27 08:27:41.622737 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Oct 27 08:27:41.622743 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Oct 27 08:27:41.622749 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Oct 27 08:27:41.622755 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Oct 27 08:27:41.622761 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Oct 27 08:27:41.622768 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Oct 27 08:27:41.622774 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Oct 27 08:27:41.622780 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Oct 27 08:27:41.622785 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Oct 27 08:27:41.622792 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Oct 27 08:27:41.622798 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Oct 27 08:27:41.622804 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Oct 27 08:27:41.622811 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Oct 27 08:27:41.622817 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Oct 27 08:27:41.622823 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Oct 27 08:27:41.622828 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Oct 27 08:27:41.622835 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=e6ac205aca0358d0b739fe2cba6f8244850dbdc9027fd8e7442161fce065515e Oct 27 08:27:41.622841 kernel: random: crng init done Oct 27 08:27:41.622847 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Oct 27 08:27:41.622854 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Oct 27 08:27:41.622861 kernel: printk: log_buf_len min size: 262144 bytes Oct 27 08:27:41.622866 kernel: printk: log_buf_len: 1048576 bytes Oct 27 08:27:41.622872 kernel: printk: early log buf free: 245688(93%) Oct 27 08:27:41.622878 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 27 08:27:41.622885 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 27 08:27:41.622891 kernel: Fallback order for Node 0: 0 Oct 27 08:27:41.622898 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 524157 Oct 27 08:27:41.622904 kernel: Policy zone: DMA32 Oct 27 08:27:41.622910 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 27 08:27:41.622916 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Oct 27 08:27:41.622922 kernel: ftrace: allocating 40092 entries in 157 pages Oct 27 08:27:41.622928 kernel: ftrace: allocated 157 pages with 5 groups Oct 27 08:27:41.622934 kernel: Dynamic Preempt: voluntary Oct 27 08:27:41.622941 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 27 08:27:41.622948 kernel: rcu: RCU event tracing is enabled. Oct 27 08:27:41.622954 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Oct 27 08:27:41.622960 kernel: Trampoline variant of Tasks RCU enabled. Oct 27 08:27:41.622966 kernel: Rude variant of Tasks RCU enabled. Oct 27 08:27:41.622972 kernel: Tracing variant of Tasks RCU enabled. Oct 27 08:27:41.622978 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 27 08:27:41.622984 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Oct 27 08:27:41.622990 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Oct 27 08:27:41.622999 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Oct 27 08:27:41.623006 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Oct 27 08:27:41.623012 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Oct 27 08:27:41.623018 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Oct 27 08:27:41.623024 kernel: Console: colour VGA+ 80x25 Oct 27 08:27:41.623030 kernel: printk: legacy console [tty0] enabled Oct 27 08:27:41.623036 kernel: printk: legacy console [ttyS0] enabled Oct 27 08:27:41.623044 kernel: ACPI: Core revision 20240827 Oct 27 08:27:41.623050 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Oct 27 08:27:41.623056 kernel: APIC: Switch to symmetric I/O mode setup Oct 27 08:27:41.623062 kernel: x2apic enabled Oct 27 08:27:41.623068 kernel: APIC: Switched APIC routing to: physical x2apic Oct 27 08:27:41.623074 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 27 08:27:41.623080 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Oct 27 08:27:41.623088 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Oct 27 08:27:41.623094 kernel: Disabled fast string operations Oct 27 08:27:41.623099 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Oct 27 08:27:41.623106 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Oct 27 08:27:41.623112 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 27 08:27:41.623118 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Oct 27 08:27:41.623124 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Oct 27 08:27:41.623131 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Oct 27 08:27:41.623138 kernel: RETBleed: Mitigation: Enhanced IBRS Oct 27 08:27:41.623144 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 27 08:27:41.623150 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 27 08:27:41.623160 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 27 08:27:41.623166 kernel: SRBDS: Unknown: Dependent on hypervisor status Oct 27 08:27:41.623172 kernel: GDS: Unknown: Dependent on hypervisor status Oct 27 08:27:41.623179 kernel: active return thunk: its_return_thunk Oct 27 08:27:41.623185 kernel: ITS: Mitigation: Aligned branch/return thunks Oct 27 08:27:41.623191 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 27 08:27:41.623198 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 27 08:27:41.623204 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 27 08:27:41.623210 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 27 08:27:41.623216 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 27 08:27:41.623224 kernel: Freeing SMP alternatives memory: 32K Oct 27 08:27:41.623230 kernel: pid_max: default: 131072 minimum: 1024 Oct 27 08:27:41.623236 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 27 08:27:41.623242 kernel: landlock: Up and running. Oct 27 08:27:41.623248 kernel: SELinux: Initializing. Oct 27 08:27:41.623254 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 27 08:27:41.623260 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 27 08:27:41.623267 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Oct 27 08:27:41.623274 kernel: Performance Events: Skylake events, core PMU driver. Oct 27 08:27:41.623281 kernel: core: CPUID marked event: 'cpu cycles' unavailable Oct 27 08:27:41.624982 kernel: core: CPUID marked event: 'instructions' unavailable Oct 27 08:27:41.624991 kernel: core: CPUID marked event: 'bus cycles' unavailable Oct 27 08:27:41.624998 kernel: core: CPUID marked event: 'cache references' unavailable Oct 27 08:27:41.625004 kernel: core: CPUID marked event: 'cache misses' unavailable Oct 27 08:27:41.625012 kernel: core: CPUID marked event: 'branch instructions' unavailable Oct 27 08:27:41.625018 kernel: core: CPUID marked event: 'branch misses' unavailable Oct 27 08:27:41.625024 kernel: ... version: 1 Oct 27 08:27:41.625030 kernel: ... bit width: 48 Oct 27 08:27:41.625037 kernel: ... generic registers: 4 Oct 27 08:27:41.625043 kernel: ... value mask: 0000ffffffffffff Oct 27 08:27:41.625049 kernel: ... max period: 000000007fffffff Oct 27 08:27:41.625056 kernel: ... 
fixed-purpose events: 0 Oct 27 08:27:41.625063 kernel: ... event mask: 000000000000000f Oct 27 08:27:41.625069 kernel: signal: max sigframe size: 1776 Oct 27 08:27:41.625075 kernel: rcu: Hierarchical SRCU implementation. Oct 27 08:27:41.625081 kernel: rcu: Max phase no-delay instances is 400. Oct 27 08:27:41.625087 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level Oct 27 08:27:41.625098 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 27 08:27:41.625106 kernel: smp: Bringing up secondary CPUs ... Oct 27 08:27:41.625112 kernel: smpboot: x86: Booting SMP configuration: Oct 27 08:27:41.625118 kernel: .... node #0, CPUs: #1 Oct 27 08:27:41.625124 kernel: Disabled fast string operations Oct 27 08:27:41.625130 kernel: smp: Brought up 1 node, 2 CPUs Oct 27 08:27:41.625136 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Oct 27 08:27:41.625143 kernel: Memory: 1946772K/2096628K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 138476K reserved, 0K cma-reserved) Oct 27 08:27:41.625149 kernel: devtmpfs: initialized Oct 27 08:27:41.625159 kernel: x86/mm: Memory block size: 128MB Oct 27 08:27:41.625166 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Oct 27 08:27:41.625173 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 27 08:27:41.625179 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Oct 27 08:27:41.625185 kernel: pinctrl core: initialized pinctrl subsystem Oct 27 08:27:41.625191 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 27 08:27:41.625197 kernel: audit: initializing netlink subsys (disabled) Oct 27 08:27:41.625204 kernel: audit: type=2000 audit(1761553659.280:1): state=initialized audit_enabled=0 res=1 Oct 27 08:27:41.625210 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 27 08:27:41.625216 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 27 08:27:41.625222 kernel: cpuidle: using governor menu Oct 27 08:27:41.625228 kernel: Simple Boot Flag at 0x36 set to 0x80 Oct 27 08:27:41.625234 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 27 08:27:41.625240 kernel: dca service started, version 1.12.1 Oct 27 08:27:41.625248 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f] Oct 27 08:27:41.625261 kernel: PCI: Using configuration type 1 for base access Oct 27 08:27:41.625269 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 27 08:27:41.625276 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 27 08:27:41.625282 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 27 08:27:41.625289 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 27 08:27:41.625295 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 27 08:27:41.625302 kernel: ACPI: Added _OSI(Module Device) Oct 27 08:27:41.625308 kernel: ACPI: Added _OSI(Processor Device) Oct 27 08:27:41.625315 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 27 08:27:41.625322 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 27 08:27:41.625329 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Oct 27 08:27:41.625335 kernel: ACPI: Interpreter enabled Oct 27 08:27:41.625341 kernel: ACPI: PM: (supports S0 S1 S5) Oct 27 08:27:41.625349 kernel: ACPI: Using IOAPIC for interrupt routing Oct 27 08:27:41.625355 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 27 08:27:41.625362 kernel: PCI: Using E820 reservations for host bridge windows Oct 27 08:27:41.625368 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Oct 27 08:27:41.625375 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Oct 27 08:27:41.626160 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 27 08:27:41.626239 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Oct 27 08:27:41.626307 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Oct 27 08:27:41.626317 kernel: PCI host bridge to bus 0000:00 Oct 27 08:27:41.626385 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 27 08:27:41.626497 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Oct 27 08:27:41.626562 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 27 08:27:41.626626 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 27 08:27:41.626686 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Oct 27 08:27:41.626745 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Oct 27 08:27:41.626824 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint Oct 27 08:27:41.626897 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge Oct 27 08:27:41.626975 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 27 08:27:41.627048 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Oct 27 08:27:41.627120 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint Oct 27 08:27:41.627192 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f] Oct 27 08:27:41.627258 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Oct 27 08:27:41.627324 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk Oct 27 08:27:41.627389 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Oct 27 08:27:41.627493 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk Oct 27 08:27:41.627576 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Oct 27 08:27:41.627647 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Oct 27 08:27:41.627713 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Oct 27 08:27:41.627782 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 
0x088000 conventional PCI endpoint Oct 27 08:27:41.627849 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf] Oct 27 08:27:41.627915 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit] Oct 27 08:27:41.627988 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint Oct 27 08:27:41.628060 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f] Oct 27 08:27:41.628127 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref] Oct 27 08:27:41.628761 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff] Oct 27 08:27:41.628832 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref] Oct 27 08:27:41.628899 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 27 08:27:41.628975 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge Oct 27 08:27:41.629043 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Oct 27 08:27:41.629113 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Oct 27 08:27:41.629179 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Oct 27 08:27:41.629244 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 27 08:27:41.629315 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.629385 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 27 08:27:41.629499 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Oct 27 08:27:41.629568 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Oct 27 08:27:41.629635 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.629707 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.629775 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 27 08:27:41.629844 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Oct 27 08:27:41.629909 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Oct 27 08:27:41.629995 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Oct 27 08:27:41.630063 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.630133 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.630199 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 27 08:27:41.630269 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Oct 27 08:27:41.630335 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Oct 27 08:27:41.630402 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 27 08:27:41.631314 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.631397 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.631498 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 27 08:27:41.631573 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 27 08:27:41.631655 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 27 08:27:41.631723 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.631795 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.631869 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 27 08:27:41.631935 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 27 08:27:41.632001 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 27 08:27:41.632067 
kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.632138 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.632205 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 27 08:27:41.632274 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 27 08:27:41.632339 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 27 08:27:41.632405 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.632494 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.632563 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 27 08:27:41.632628 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 27 08:27:41.632697 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 27 08:27:41.632763 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.632833 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.632912 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 27 08:27:41.632982 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 27 08:27:41.633049 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 27 08:27:41.633118 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.633191 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.633258 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 27 08:27:41.633325 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 27 08:27:41.633391 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 27 08:27:41.633493 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.633569 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.633635 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 27 08:27:41.633701 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 27 08:27:41.633766 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 27 08:27:41.633832 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 27 08:27:41.633897 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.633973 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.634040 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 27 08:27:41.634106 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 27 08:27:41.634172 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 27 08:27:41.634237 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 27 08:27:41.634303 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.634375 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.634703 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 27 08:27:41.634782 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Oct 27 08:27:41.634852 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 27 08:27:41.634920 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.634992 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.635063 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 27 08:27:41.635130 kernel: pci 
0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 27 08:27:41.635195 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 27 08:27:41.635261 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.635331 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.635397 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 27 08:27:41.635490 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 27 08:27:41.635558 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 27 08:27:41.635624 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.635694 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.635760 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 27 08:27:41.635826 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 27 08:27:41.635895 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 27 08:27:41.635960 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.636029 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.636098 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 27 08:27:41.636164 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 27 08:27:41.636229 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 27 08:27:41.636297 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.636368 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.636447 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 27 08:27:41.636522 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 27 08:27:41.636588 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 27 08:27:41.636654 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 27 08:27:41.636725 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.636794 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.638510 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 27 08:27:41.638588 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 27 08:27:41.638656 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 27 08:27:41.638724 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 27 08:27:41.638791 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.638861 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.638932 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 27 08:27:41.638999 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 27 08:27:41.639065 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 27 08:27:41.639131 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 27 08:27:41.639197 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.639266 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.639335 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 27 08:27:41.639401 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 27 08:27:41.639477 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 27 
08:27:41.639544 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.639614 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.639681 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 27 08:27:41.639750 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 27 08:27:41.639819 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 27 08:27:41.639885 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.639960 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.640027 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 27 08:27:41.640092 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 27 08:27:41.640161 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 27 08:27:41.640225 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.640294 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.640360 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 27 08:27:41.640426 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 27 08:27:41.643224 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Oct 27 08:27:41.643300 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.643375 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.643719 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 27 08:27:41.643797 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 27 08:27:41.643867 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 27 08:27:41.643935 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.644011 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.644078 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 27 08:27:41.644145 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 27 08:27:41.644211 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 27 08:27:41.644277 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 27 08:27:41.644342 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.644415 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.644515 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 27 08:27:41.644583 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 27 08:27:41.644649 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Oct 27 08:27:41.644715 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 27 08:27:41.644780 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.644854 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.644921 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 27 08:27:41.644988 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 27 08:27:41.645054 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 27 08:27:41.645120 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.645191 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.645257 kernel: pci 0000:00:18.3: PCI bridge to [bus 
1e] Oct 27 08:27:41.645323 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 27 08:27:41.645389 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 27 08:27:41.645475 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.645549 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.645619 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 27 08:27:41.645684 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 27 08:27:41.645754 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 27 08:27:41.645821 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.645891 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.645963 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 27 08:27:41.646032 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 27 08:27:41.646098 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 27 08:27:41.646162 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.646233 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.646298 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 27 08:27:41.646363 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 27 08:27:41.646430 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 27 08:27:41.647764 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.647848 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 27 08:27:41.647920 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 27 08:27:41.647988 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 27 08:27:41.648055 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 27 08:27:41.648124 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.648213 kernel: pci_bus 0000:01: extended config space not accessible Oct 27 08:27:41.648305 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 27 08:27:41.648399 kernel: pci_bus 0000:02: extended config space not accessible Oct 27 08:27:41.648414 kernel: acpiphp: Slot [32] registered Oct 27 08:27:41.648421 kernel: acpiphp: Slot [33] registered Oct 27 08:27:41.648430 kernel: acpiphp: Slot [34] registered Oct 27 08:27:41.648462 kernel: acpiphp: Slot [35] registered Oct 27 08:27:41.648468 kernel: acpiphp: Slot [36] registered Oct 27 08:27:41.648478 kernel: acpiphp: Slot [37] registered Oct 27 08:27:41.648485 kernel: acpiphp: Slot [38] registered Oct 27 08:27:41.648491 kernel: acpiphp: Slot [39] registered Oct 27 08:27:41.648498 kernel: acpiphp: Slot [40] registered Oct 27 08:27:41.648504 kernel: acpiphp: Slot [41] registered Oct 27 08:27:41.648517 kernel: acpiphp: Slot [42] registered Oct 27 08:27:41.648523 kernel: acpiphp: Slot [43] registered Oct 27 08:27:41.648530 kernel: acpiphp: Slot [44] registered Oct 27 08:27:41.648536 kernel: acpiphp: Slot [45] registered Oct 27 08:27:41.648545 kernel: acpiphp: Slot [46] registered Oct 27 08:27:41.648552 kernel: acpiphp: Slot [47] registered Oct 27 08:27:41.648558 kernel: acpiphp: Slot [48] registered Oct 27 08:27:41.648566 kernel: acpiphp: Slot [49] registered Oct 27 08:27:41.648577 kernel: acpiphp: Slot [50] registered Oct 27 08:27:41.648585 kernel: acpiphp: Slot [51] registered Oct 27 
08:27:41.648591 kernel: acpiphp: Slot [52] registered Oct 27 08:27:41.648598 kernel: acpiphp: Slot [53] registered Oct 27 08:27:41.648608 kernel: acpiphp: Slot [54] registered Oct 27 08:27:41.648614 kernel: acpiphp: Slot [55] registered Oct 27 08:27:41.648620 kernel: acpiphp: Slot [56] registered Oct 27 08:27:41.648629 kernel: acpiphp: Slot [57] registered Oct 27 08:27:41.648638 kernel: acpiphp: Slot [58] registered Oct 27 08:27:41.648645 kernel: acpiphp: Slot [59] registered Oct 27 08:27:41.648651 kernel: acpiphp: Slot [60] registered Oct 27 08:27:41.648657 kernel: acpiphp: Slot [61] registered Oct 27 08:27:41.648668 kernel: acpiphp: Slot [62] registered Oct 27 08:27:41.648675 kernel: acpiphp: Slot [63] registered Oct 27 08:27:41.648766 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Oct 27 08:27:41.648845 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Oct 27 08:27:41.648912 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Oct 27 08:27:41.648978 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Oct 27 08:27:41.649043 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Oct 27 08:27:41.649108 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Oct 27 08:27:41.649184 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Oct 27 08:27:41.649253 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Oct 27 08:27:41.649321 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Oct 27 08:27:41.649388 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Oct 27 08:27:41.649471 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Oct 27 08:27:41.649540 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Oct 27 08:27:41.649610 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 27 08:27:41.649686 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 27 08:27:41.649763 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 27 08:27:41.649838 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 27 08:27:41.649913 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 27 08:27:41.650020 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 27 08:27:41.650137 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 27 08:27:41.650246 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 27 08:27:41.650362 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Oct 27 08:27:41.650475 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Oct 27 08:27:41.652981 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Oct 27 08:27:41.653076 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Oct 27 08:27:41.653150 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] Oct 27 08:27:41.653220 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Oct 27 08:27:41.653288 kernel: pci 0000:0b:00.0: supports D1 D2 Oct 27 08:27:41.653356 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 27 08:27:41.653424 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Oct 27 08:27:41.653507 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 27 08:27:41.653579 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 27 08:27:41.653649 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 27 08:27:41.653729 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 27 08:27:41.653800 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 27 08:27:41.653868 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 27 08:27:41.653936 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 27 08:27:41.654007 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 27 08:27:41.654074 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 27 08:27:41.654141 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 27 08:27:41.654218 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 27 08:27:41.654286 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 27 08:27:41.654354 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 27 08:27:41.654437 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 27 08:27:41.654513 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 27 08:27:41.654585 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 27 08:27:41.654659 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 27 08:27:41.654727 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 27 08:27:41.654806 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 27 08:27:41.654882 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 27 08:27:41.654958 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 27 08:27:41.655027 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 27 08:27:41.655093 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 27 08:27:41.655161 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 27 08:27:41.655170 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Oct 27 08:27:41.655179 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Oct 27 08:27:41.655186 kernel: ACPI: PCI: Interrupt link LNKB disabled Oct 27 08:27:41.655192 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 27 08:27:41.655199 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Oct 27 08:27:41.655205 kernel: iommu: Default domain type: Translated Oct 27 08:27:41.655212 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 27 08:27:41.655218 kernel: PCI: Using ACPI for IRQ routing Oct 27 08:27:41.655225 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 27 08:27:41.655233 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Oct 27 08:27:41.655239 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Oct 27 08:27:41.655305 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Oct 27 08:27:41.655370 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Oct 27 08:27:41.656600 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 27 08:27:41.656613 kernel: vgaarb: loaded Oct 27 08:27:41.656621 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Oct 27 08:27:41.656630 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Oct 27 08:27:41.656637 kernel: clocksource: Switched to clocksource tsc-early Oct 27 08:27:41.656644 kernel: VFS: Disk quotas dquot_6.6.0 Oct 27 08:27:41.656650 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 27 08:27:41.656657 kernel: pnp: PnP ACPI init Oct 27 08:27:41.656738 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Oct 27 08:27:41.656808 kernel: system 
00:00: [io 0x1040-0x104f] has been reserved Oct 27 08:27:41.656871 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Oct 27 08:27:41.656944 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Oct 27 08:27:41.657037 kernel: pnp 00:06: [dma 2] Oct 27 08:27:41.657104 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Oct 27 08:27:41.657169 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Oct 27 08:27:41.657231 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Oct 27 08:27:41.657240 kernel: pnp: PnP ACPI: found 8 devices Oct 27 08:27:41.657247 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 27 08:27:41.657254 kernel: NET: Registered PF_INET protocol family Oct 27 08:27:41.657261 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 27 08:27:41.657268 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 27 08:27:41.657276 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 27 08:27:41.657283 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 27 08:27:41.657289 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 27 08:27:41.657296 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 27 08:27:41.657303 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 27 08:27:41.657310 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 27 08:27:41.657316 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 27 08:27:41.657324 kernel: NET: Registered PF_XDP protocol family Oct 27 08:27:41.657393 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Oct 27 08:27:41.657483 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Oct 27 08:27:41.657555 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 27 08:27:41.657625 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 27 08:27:41.658093 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 27 08:27:41.658167 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Oct 27 08:27:41.658236 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Oct 27 08:27:41.658304 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Oct 27 08:27:41.658373 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Oct 27 08:27:41.658457 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Oct 27 08:27:41.658529 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Oct 27 08:27:41.658598 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Oct 27 08:27:41.658669 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Oct 27 08:27:41.658736 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Oct 27 08:27:41.658803 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Oct 27 08:27:41.658871 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Oct 27 
08:27:41.658938 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Oct 27 08:27:41.659006 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Oct 27 08:27:41.659076 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Oct 27 08:27:41.659143 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Oct 27 08:27:41.659210 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Oct 27 08:27:41.659276 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Oct 27 08:27:41.659343 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Oct 27 08:27:41.659410 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Oct 27 08:27:41.659499 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Oct 27 08:27:41.659566 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.659632 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.659698 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.659764 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.659829 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.659894 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.659969 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.660036 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.660103 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.660169 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.660234 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.660300 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.660369 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.664469 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.664577 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.664666 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.664757 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.664837 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.664917 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.664994 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.665072 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.665375 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.665476 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.665549 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.665686 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.665761 kernel: pci 0000:00:17.5: 
bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.665832 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.665900 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.665970 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.666039 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.666109 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.666180 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.666249 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.666319 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.666390 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.666476 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.666546 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.666619 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.666689 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.666759 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.666828 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.666897 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.666977 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.667047 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.667174 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.667243 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.667310 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.667376 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.667456 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.667526 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.667593 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.667677 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.667747 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.667814 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.667882 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.667950 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.668017 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.668083 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.668151 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.668221 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.668289 kernel: pci 0000:00:17.4: bridge window [io size 
0x1000]: can't assign; no space Oct 27 08:27:41.668356 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.668425 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.668508 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.668577 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.668644 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.668732 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.668799 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.668867 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.668934 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.669002 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.669068 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.669136 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.669203 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.669276 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.669343 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.669410 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.669487 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.669558 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.669630 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.669704 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.669771 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.669839 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Oct 27 08:27:41.669904 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Oct 27 08:27:41.669976 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 27 08:27:41.670044 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Oct 27 08:27:41.670192 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Oct 27 08:27:41.670260 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Oct 27 08:27:41.670328 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 27 08:27:41.670400 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Oct 27 08:27:41.670486 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 27 08:27:41.670553 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Oct 27 08:27:41.670628 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Oct 27 08:27:41.670695 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Oct 27 08:27:41.670817 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 27 08:27:41.670886 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Oct 27 08:27:41.670952 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Oct 27 08:27:41.671016 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit 
pref] Oct 27 08:27:41.671084 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 27 08:27:41.671150 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Oct 27 08:27:41.671220 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Oct 27 08:27:41.671284 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 27 08:27:41.671350 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 27 08:27:41.671415 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 27 08:27:41.671491 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 27 08:27:41.671558 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 27 08:27:41.671624 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 27 08:27:41.671706 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 27 08:27:41.671773 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 27 08:27:41.671838 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 27 08:27:41.671903 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 27 08:27:41.671969 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 27 08:27:41.672035 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 27 08:27:41.672103 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 27 08:27:41.672168 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 27 08:27:41.672233 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 27 08:27:41.672297 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 27 08:27:41.672366 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Oct 27 08:27:41.672447 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 27 08:27:41.672518 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 27 08:27:41.672584 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 27 08:27:41.672650 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Oct 27 08:27:41.672716 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 27 08:27:41.672783 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 27 08:27:41.672850 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 27 08:27:41.672917 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 27 08:27:41.672993 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 27 08:27:41.673060 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 27 08:27:41.674784 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 27 08:27:41.674869 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 27 08:27:41.674942 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 27 08:27:41.675013 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Oct 27 08:27:41.675081 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 27 08:27:41.675154 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 27 08:27:41.675236 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 27 08:27:41.675305 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 27 08:27:41.675375 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 27 08:27:41.675451 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 27 08:27:41.675519 kernel: pci 
0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 27 08:27:41.675591 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 27 08:27:41.675658 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 27 08:27:41.675725 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 27 08:27:41.675794 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 27 08:27:41.675861 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 27 08:27:41.675927 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 27 08:27:41.675997 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 27 08:27:41.676064 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 27 08:27:41.676345 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 27 08:27:41.676416 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 27 08:27:41.676501 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 27 08:27:41.676569 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 27 08:27:41.678015 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 27 08:27:41.678090 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 27 08:27:41.678162 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 27 08:27:41.678231 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 27 08:27:41.678299 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 27 08:27:41.678366 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 27 08:27:41.678446 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 27 08:27:41.678519 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 27 08:27:41.678584 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 27 08:27:41.678657 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 27 08:27:41.678724 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 27 08:27:41.678790 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 27 08:27:41.678859 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 27 08:27:41.678925 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 27 08:27:41.679008 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 27 08:27:41.679079 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 27 08:27:41.679144 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 27 08:27:41.679209 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Oct 27 08:27:41.679275 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 27 08:27:41.679341 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 27 08:27:41.679407 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 27 08:27:41.683301 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 27 08:27:41.683375 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 27 08:27:41.683454 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 27 08:27:41.683930 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 27 08:27:41.684004 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 27 08:27:41.684072 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 27 08:27:41.684142 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] 
Oct 27 08:27:41.684209 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 27 08:27:41.684277 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 27 08:27:41.684343 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 27 08:27:41.684409 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 27 08:27:41.684487 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 27 08:27:41.684555 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 27 08:27:41.684624 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 27 08:27:41.684693 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 27 08:27:41.684760 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 27 08:27:41.684827 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 27 08:27:41.684895 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 27 08:27:41.684962 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 27 08:27:41.685031 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 27 08:27:41.685099 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 27 08:27:41.685166 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 27 08:27:41.685231 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 27 08:27:41.685299 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 27 08:27:41.685365 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 27 08:27:41.685457 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 27 08:27:41.685530 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Oct 27 08:27:41.685590 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Oct 27 08:27:41.685649 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Oct 27 08:27:41.685707 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Oct 27 08:27:41.685765 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Oct 27 08:27:41.685834 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Oct 27 08:27:41.685895 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Oct 27 08:27:41.685966 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 27 08:27:41.686028 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Oct 27 08:27:41.686303 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Oct 27 08:27:41.686369 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Oct 27 08:27:41.686441 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Oct 27 08:27:41.686504 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Oct 27 08:27:41.686571 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Oct 27 08:27:41.686633 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Oct 27 08:27:41.686694 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Oct 27 08:27:41.686759 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Oct 27 08:27:41.686823 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Oct 27 08:27:41.686884 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Oct 27 08:27:41.686962 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Oct 27 08:27:41.687027 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Oct 
27 08:27:41.687087 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Oct 27 08:27:41.687156 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Oct 27 08:27:41.690225 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Oct 27 08:27:41.690299 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Oct 27 08:27:41.690363 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 27 08:27:41.690440 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Oct 27 08:27:41.690510 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Oct 27 08:27:41.690580 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Oct 27 08:27:41.690642 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Oct 27 08:27:41.690709 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Oct 27 08:27:41.690785 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Oct 27 08:27:41.690861 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Oct 27 08:27:41.690923 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Oct 27 08:27:41.690983 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Oct 27 08:27:41.691051 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Oct 27 08:27:41.691112 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Oct 27 08:27:41.691181 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Oct 27 08:27:41.691246 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Oct 27 08:27:41.691307 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Oct 27 08:27:41.691367 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Oct 27 08:27:41.691439 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Oct 27 08:27:41.691504 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 27 08:27:41.691571 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Oct 27 08:27:41.691634 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 27 08:27:41.691699 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Oct 27 08:27:41.691760 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Oct 27 08:27:41.691827 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Oct 27 08:27:41.691891 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Oct 27 08:27:41.691956 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Oct 27 08:27:41.692017 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 27 08:27:41.692084 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Oct 27 08:27:41.692145 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Oct 27 08:27:41.692220 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 27 08:27:41.692289 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Oct 27 08:27:41.692357 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Oct 27 08:27:41.692422 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Oct 27 08:27:41.692501 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Oct 27 08:27:41.692563 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Oct 27 08:27:41.692627 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Oct 27 08:27:41.692693 kernel: 
pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Oct 27 08:27:41.692754 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 27 08:27:41.692857 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Oct 27 08:27:41.692925 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 27 08:27:41.692990 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Oct 27 08:27:41.693054 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Oct 27 08:27:41.693123 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Oct 27 08:27:41.693191 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Oct 27 08:27:41.693260 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Oct 27 08:27:41.693337 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 27 08:27:41.693405 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Oct 27 08:27:41.693475 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Oct 27 08:27:41.693536 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Oct 27 08:27:41.693602 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Oct 27 08:27:41.693663 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Oct 27 08:27:41.693725 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Oct 27 08:27:41.693792 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Oct 27 08:27:41.693853 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Oct 27 08:27:41.693920 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Oct 27 08:27:41.693995 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 27 08:27:41.694062 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Oct 27 08:27:41.694126 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Oct 27 08:27:41.694191 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Oct 27 08:27:41.694253 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Oct 27 08:27:41.694318 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Oct 27 08:27:41.694380 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Oct 27 08:27:41.694456 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Oct 27 08:27:41.694519 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 27 08:27:41.694593 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 27 08:27:41.694603 kernel: PCI: CLS 32 bytes, default 64 Oct 27 08:27:41.694610 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 27 08:27:41.694617 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Oct 27 08:27:41.694626 kernel: clocksource: Switched to clocksource tsc Oct 27 08:27:41.694633 kernel: Initialise system trusted keyrings Oct 27 08:27:41.694640 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 27 08:27:41.694647 kernel: Key type asymmetric registered Oct 27 08:27:41.694653 kernel: Asymmetric key parser 'x509' registered Oct 27 08:27:41.694659 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 27 08:27:41.694666 kernel: io scheduler mq-deadline registered Oct 27 08:27:41.694674 kernel: io scheduler kyber registered Oct 27 08:27:41.694680 kernel: io scheduler bfq 
registered Oct 27 08:27:41.694749 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Oct 27 08:27:41.694819 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.694889 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Oct 27 08:27:41.694960 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.695041 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Oct 27 08:27:41.695110 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.695176 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Oct 27 08:27:41.695244 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.695312 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Oct 27 08:27:41.695394 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.695500 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Oct 27 08:27:41.695570 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.695637 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Oct 27 08:27:41.695704 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.695787 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Oct 27 08:27:41.695856 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.695928 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Oct 27 08:27:41.695994 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.696082 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Oct 27 08:27:41.696151 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.696225 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Oct 27 08:27:41.696292 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.696364 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Oct 27 08:27:41.696437 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.696517 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Oct 27 08:27:41.696588 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.696658 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Oct 27 08:27:41.696725 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ 
Oct 27 08:27:41.696795 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Oct 27 08:27:41.696864 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.696933 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Oct 27 08:27:41.697015 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.697085 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Oct 27 08:27:41.697153 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.697224 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Oct 27 08:27:41.697291 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.697360 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Oct 27 08:27:41.697426 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.697509 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Oct 27 08:27:41.697577 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.697645 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Oct 27 08:27:41.697716 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.697785 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Oct 27 08:27:41.697853 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.697921 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Oct 27 08:27:41.697989 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.698058 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Oct 27 08:27:41.698129 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.698199 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Oct 27 08:27:41.698267 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.698335 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Oct 27 08:27:41.698403 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.698486 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Oct 27 08:27:41.698558 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.698627 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Oct 27 08:27:41.698694 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 
08:27:41.698770 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Oct 27 08:27:41.698838 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.698907 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Oct 27 08:27:41.698984 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.699052 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Oct 27 08:27:41.699119 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.699186 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Oct 27 08:27:41.699254 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 27 08:27:41.699267 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 27 08:27:41.699275 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 27 08:27:41.699282 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 27 08:27:41.699289 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Oct 27 08:27:41.699296 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 27 08:27:41.699303 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 27 08:27:41.699372 kernel: rtc_cmos 00:01: registered as rtc0 Oct 27 08:27:41.699448 kernel: rtc_cmos 00:01: setting system clock to 2025-10-27T08:27:40 UTC (1761553660) Oct 27 08:27:41.699459 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 27 08:27:41.699531 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Oct 27 08:27:41.699543 kernel: intel_pstate: CPU model not supported Oct 27 08:27:41.699550 kernel: NET: Registered PF_INET6 protocol family Oct 27 08:27:41.699557 kernel: Segment Routing with IPv6 Oct 27 08:27:41.699563 kernel: In-situ OAM (IOAM) with IPv6 Oct 27 08:27:41.699573 kernel: NET: Registered PF_PACKET protocol family Oct 27 08:27:41.699580 kernel: Key type dns_resolver registered Oct 27 08:27:41.699586 kernel: IPI shorthand broadcast: enabled Oct 27 08:27:41.699593 kernel: sched_clock: Marking stable (1448380421, 169510678)->(1633373621, -15482522) Oct 27 08:27:41.699600 kernel: registered taskstats version 1 Oct 27 08:27:41.699607 kernel: Loading compiled-in X.509 certificates Oct 27 08:27:41.699620 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 6c7ef547b8d769f7afd2708799fb9c3145695bfb' Oct 27 08:27:41.699633 kernel: Demotion targets for Node 0: null Oct 27 08:27:41.699643 kernel: Key type .fscrypt registered Oct 27 08:27:41.699653 kernel: Key type fscrypt-provisioning registered Oct 27 08:27:41.699663 kernel: ima: No TPM chip found, activating TPM-bypass! 
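The pciehp lines above show each VMware PCIe root port (functions 00:15.x through 00:18.x) registering a hot-plug slot numbered #160 through #263. A minimal sketch for confirming those slot registrations from inside the running guest, assuming sysfs is mounted at /sys, could look like this:

    #!/usr/bin/env python3
    # Sketch: list the PCIe hot-plug slots pciehp registered and the PCI
    # address each slot maps to (standard sysfs layout under /sys/bus/pci/slots).
    from pathlib import Path

    slots = Path("/sys/bus/pci/slots")
    for slot in sorted(slots.iterdir(), key=lambda p: p.name):
        address = slot / "address"
        if address.is_file():
            print(f"slot {slot.name}: {address.read_text().strip()}")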
Oct 27 08:27:41.699675 kernel: ima: Allocated hash algorithm: sha1 Oct 27 08:27:41.699682 kernel: ima: No architecture policies found Oct 27 08:27:41.699689 kernel: clk: Disabling unused clocks Oct 27 08:27:41.699698 kernel: Freeing unused kernel image (initmem) memory: 15964K Oct 27 08:27:41.699705 kernel: Write protecting the kernel read-only data: 40960k Oct 27 08:27:41.699712 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Oct 27 08:27:41.699719 kernel: Run /init as init process Oct 27 08:27:41.699732 kernel: with arguments: Oct 27 08:27:41.699740 kernel: /init Oct 27 08:27:41.699746 kernel: with environment: Oct 27 08:27:41.699755 kernel: HOME=/ Oct 27 08:27:41.699761 kernel: TERM=linux Oct 27 08:27:41.699768 kernel: SCSI subsystem initialized Oct 27 08:27:41.699775 kernel: VMware PVSCSI driver - version 1.0.7.0-k Oct 27 08:27:41.699782 kernel: vmw_pvscsi: using 64bit dma Oct 27 08:27:41.699788 kernel: vmw_pvscsi: max_id: 16 Oct 27 08:27:41.699795 kernel: vmw_pvscsi: setting ring_pages to 8 Oct 27 08:27:41.699802 kernel: vmw_pvscsi: enabling reqCallThreshold Oct 27 08:27:41.699811 kernel: vmw_pvscsi: driver-based request coalescing enabled Oct 27 08:27:41.699817 kernel: vmw_pvscsi: using MSI-X Oct 27 08:27:41.699902 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Oct 27 08:27:41.699985 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Oct 27 08:27:41.700068 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Oct 27 08:27:41.700147 kernel: sd 0:0:0:0: [sda] 25804800 512-byte logical blocks: (13.2 GB/12.3 GiB) Oct 27 08:27:41.700222 kernel: sd 0:0:0:0: [sda] Write Protect is off Oct 27 08:27:41.700292 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Oct 27 08:27:41.700368 kernel: sd 0:0:0:0: [sda] Cache data unavailable Oct 27 08:27:41.700819 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Oct 27 08:27:41.700832 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 27 08:27:41.700910 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Oct 27 08:27:41.700923 kernel: libata version 3.00 loaded. Oct 27 08:27:41.700994 kernel: ata_piix 0000:00:07.1: version 2.13 Oct 27 08:27:41.701066 kernel: scsi host1: ata_piix Oct 27 08:27:41.701139 kernel: scsi host2: ata_piix Oct 27 08:27:41.701150 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 Oct 27 08:27:41.701157 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 Oct 27 08:27:41.701166 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Oct 27 08:27:41.701246 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Oct 27 08:27:41.701320 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Oct 27 08:27:41.701330 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 27 08:27:41.701338 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
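The PVSCSI probe above reports /dev/sda as 25804800 512-byte logical blocks (13.2 GB / 12.3 GiB). A small sketch to re-derive that capacity from sysfs on the running guest, assuming the disk really is exposed as sda:

    #!/usr/bin/env python3
    # Sketch: recompute the disk size the kernel printed from /sys/block.
    from pathlib import Path

    sectors = int(Path("/sys/block/sda/size").read_text())  # 512-byte units
    size = sectors * 512
    print(f"sda: {sectors} sectors, {size / 1e9:.1f} GB, {size / 2**30:.1f} GiB")
    # The log above corresponds to 25804800 sectors -> 13.2 GB / 12.3 GiB.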
Oct 27 08:27:41.701345 kernel: device-mapper: uevent: version 1.0.3 Oct 27 08:27:41.701354 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 27 08:27:41.701425 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 27 08:27:41.701441 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 27 08:27:41.701450 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:27:41.701457 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:27:41.701464 kernel: raid6: avx2x4 gen() 47119 MB/s Oct 27 08:27:41.701471 kernel: raid6: avx2x2 gen() 53002 MB/s Oct 27 08:27:41.701480 kernel: raid6: avx2x1 gen() 44107 MB/s Oct 27 08:27:41.701486 kernel: raid6: using algorithm avx2x2 gen() 53002 MB/s Oct 27 08:27:41.701493 kernel: raid6: .... xor() 31932 MB/s, rmw enabled Oct 27 08:27:41.701500 kernel: raid6: using avx2x2 recovery algorithm Oct 27 08:27:41.701507 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:27:41.701514 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:27:41.701520 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:27:41.701527 kernel: xor: automatically using best checksumming function avx Oct 27 08:27:41.701535 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:27:41.701541 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 27 08:27:41.701548 kernel: BTRFS: device fsid bf514789-bcec-4c15-ac9d-e4c3d19a42b2 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (191) Oct 27 08:27:41.701555 kernel: BTRFS info (device dm-0): first mount of filesystem bf514789-bcec-4c15-ac9d-e4c3d19a42b2 Oct 27 08:27:41.701562 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 27 08:27:41.701569 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 27 08:27:41.701576 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 27 08:27:41.701584 kernel: BTRFS info (device dm-0): enabling free space tree Oct 27 08:27:41.701591 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:27:41.701597 kernel: loop: module loaded Oct 27 08:27:41.701605 kernel: loop0: detected capacity change from 0 to 100120 Oct 27 08:27:41.701612 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 27 08:27:41.701619 systemd[1]: Successfully made /usr/ read-only. Oct 27 08:27:41.701629 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 27 08:27:41.701637 systemd[1]: Detected virtualization vmware. Oct 27 08:27:41.701644 systemd[1]: Detected architecture x86-64. Oct 27 08:27:41.701651 systemd[1]: Running in initrd. Oct 27 08:27:41.701658 systemd[1]: No hostname configured, using default hostname. Oct 27 08:27:41.701666 systemd[1]: Hostname set to . Oct 27 08:27:41.701672 systemd[1]: Initializing machine ID from random generator. Oct 27 08:27:41.701680 systemd[1]: Queued start job for default target initrd.target. Oct 27 08:27:41.701687 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 27 08:27:41.701695 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
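systemd reports that it made /usr/ read-only on top of the dm-verity device /dev/mapper/usr. A quick check of that arrangement from the booted system (a sketch; it assumes you run it after switch-root, not inside the initrd) is to parse /proc/self/mounts:

    #!/usr/bin/env python3
    # Sketch: confirm /usr is mounted read-only from /dev/mapper/usr.
    with open("/proc/self/mounts") as mounts:
        for line in mounts:
            source, target, fstype, options = line.split()[:4]
            if target == "/usr":
                print(source, fstype, options)
                assert source == "/dev/mapper/usr"
                assert "ro" in options.split(",")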
Oct 27 08:27:41.701702 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 08:27:41.701709 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 27 08:27:41.701716 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 27 08:27:41.701724 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 27 08:27:41.701732 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 27 08:27:41.701739 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 08:27:41.701746 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 27 08:27:41.701753 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 27 08:27:41.701760 systemd[1]: Reached target paths.target - Path Units. Oct 27 08:27:41.701768 systemd[1]: Reached target slices.target - Slice Units. Oct 27 08:27:41.701776 systemd[1]: Reached target swap.target - Swaps. Oct 27 08:27:41.701783 systemd[1]: Reached target timers.target - Timer Units. Oct 27 08:27:41.701790 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 27 08:27:41.701797 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 27 08:27:41.701804 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 27 08:27:41.701812 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 27 08:27:41.701820 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 27 08:27:41.701827 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 27 08:27:41.701834 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 08:27:41.701841 systemd[1]: Reached target sockets.target - Socket Units. Oct 27 08:27:41.701848 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Oct 27 08:27:41.701855 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 27 08:27:41.701863 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 27 08:27:41.701871 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 27 08:27:41.701878 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 27 08:27:41.701886 systemd[1]: Starting systemd-fsck-usr.service... Oct 27 08:27:41.701893 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 27 08:27:41.701900 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 27 08:27:41.701907 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:27:41.701915 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 27 08:27:41.701923 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 08:27:41.701945 systemd-journald[328]: Collecting audit messages is disabled. Oct 27 08:27:41.701964 systemd[1]: Finished systemd-fsck-usr.service. Oct 27 08:27:41.701972 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
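Once systemd-journald is up, the initrd messages shown here can be read back programmatically. A sketch using the python-systemd bindings (an assumption; the package may not be present on a stock Flatcar image):

    #!/usr/bin/env python3
    # Sketch: read this boot's systemd messages back from the journal.
    from systemd import journal

    reader = journal.Reader()
    reader.this_boot()                              # limit to the current boot
    reader.add_match(SYSLOG_IDENTIFIER="systemd")   # only PID 1's messages
    for entry in reader:
        print(entry["__REALTIME_TIMESTAMP"], entry["MESSAGE"])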
Oct 27 08:27:41.701979 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 27 08:27:41.701987 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 27 08:27:41.701995 kernel: Bridge firewalling registered Oct 27 08:27:41.702002 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 27 08:27:41.702009 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 27 08:27:41.702016 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 27 08:27:41.702024 systemd-journald[328]: Journal started Oct 27 08:27:41.702040 systemd-journald[328]: Runtime Journal (/run/log/journal/656a298e221244e3b57ed2614d019753) is 4.8M, max 38.5M, 33.7M free. Oct 27 08:27:41.667650 systemd-modules-load[329]: Inserted module 'br_netfilter' Oct 27 08:27:41.705462 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:27:41.709466 systemd[1]: Started systemd-journald.service - Journal Service. Oct 27 08:27:41.709758 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Oct 27 08:27:41.710035 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 27 08:27:41.710296 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 27 08:27:41.712269 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 27 08:27:41.713107 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 27 08:27:41.715509 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 27 08:27:41.722377 systemd-tmpfiles[358]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 27 08:27:41.724983 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 08:27:41.729889 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 27 08:27:41.731509 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 27 08:27:41.746395 dracut-cmdline[373]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 ip=139.178.70.104::139.178.70.97:28::ens192:off:1.1.1.1:1.0.0.1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=e6ac205aca0358d0b739fe2cba6f8244850dbdc9027fd8e7442161fce065515e Oct 27 08:27:41.756289 systemd-resolved[357]: Positive Trust Anchors: Oct 27 08:27:41.756300 systemd-resolved[357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 27 08:27:41.756303 systemd-resolved[357]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 27 08:27:41.756325 systemd-resolved[357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 27 08:27:41.774706 systemd-resolved[357]: Defaulting to hostname 'linux'. Oct 27 08:27:41.775358 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 27 08:27:41.775530 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 27 08:27:41.818453 kernel: Loading iSCSI transport class v2.0-870. Oct 27 08:27:41.830453 kernel: iscsi: registered transport (tcp) Oct 27 08:27:41.855687 kernel: iscsi: registered transport (qla4xxx) Oct 27 08:27:41.855737 kernel: QLogic iSCSI HBA Driver Oct 27 08:27:41.872083 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 27 08:27:41.887236 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 27 08:27:41.888363 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 27 08:27:41.913217 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 27 08:27:41.914124 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 27 08:27:41.915530 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 27 08:27:41.943990 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 27 08:27:41.945249 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 08:27:41.962845 systemd-udevd[616]: Using default interface naming scheme 'v257'. Oct 27 08:27:41.969673 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 08:27:41.974768 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 27 08:27:41.987297 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 27 08:27:41.988322 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 27 08:27:41.991358 dracut-pre-trigger[696]: rd.md=0: removing MD RAID activation Oct 27 08:27:42.007216 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 27 08:27:42.008627 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 27 08:27:42.022746 systemd-networkd[723]: lo: Link UP Oct 27 08:27:42.022752 systemd-networkd[723]: lo: Gained carrier Oct 27 08:27:42.023030 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 27 08:27:42.023209 systemd[1]: Reached target network.target - Network. Oct 27 08:27:42.092705 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 08:27:42.093923 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 27 08:27:42.163736 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. 
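The device units above wait for the GPT filesystem labels and partition labels to appear under /dev/disk. A sketch that resolves those symlinks to the underlying sda partitions on this guest (label names taken from the log; the paths are the standard udev layout):

    #!/usr/bin/env python3
    # Sketch: resolve the by-label/by-partlabel links the initrd waits for.
    import os

    for link in ("/dev/disk/by-label/ROOT",
                 "/dev/disk/by-label/EFI-SYSTEM",
                 "/dev/disk/by-label/OEM",
                 "/dev/disk/by-partlabel/USR-A"):
        target = os.path.realpath(link) if os.path.exists(link) else "(not present)"
        print(f"{link} -> {target}")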
Oct 27 08:27:42.174401 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Oct 27 08:27:42.181828 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Oct 27 08:27:42.182948 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 27 08:27:42.193947 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Oct 27 08:27:42.229456 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Oct 27 08:27:42.231553 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Oct 27 08:27:42.234454 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Oct 27 08:27:42.269447 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Oct 27 08:27:42.275717 kernel: cryptd: max_cpu_qlen set to 1000 Oct 27 08:27:42.285449 kernel: AES CTR mode by8 optimization enabled Oct 27 08:27:42.310453 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Oct 27 08:27:42.310726 systemd-networkd[723]: eth0: Interface name change detected, renamed to ens192. Oct 27 08:27:42.311569 (udev-worker)[765]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Oct 27 08:27:42.316671 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 27 08:27:42.316757 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:27:42.317243 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:27:42.319847 systemd-networkd[723]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Oct 27 08:27:42.322886 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 27 08:27:42.323030 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 27 08:27:42.323892 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:27:42.328496 systemd-networkd[723]: ens192: Link UP Oct 27 08:27:42.328501 systemd-networkd[723]: ens192: Gained carrier Oct 27 08:27:42.356857 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:27:42.384757 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 27 08:27:42.385138 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 27 08:27:42.385280 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 08:27:42.385484 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 27 08:27:42.386200 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 27 08:27:42.400906 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 27 08:27:43.277050 disk-uuid[788]: Warning: The kernel is still using the old partition table. Oct 27 08:27:43.277050 disk-uuid[788]: The new table will be used at the next reboot or after you Oct 27 08:27:43.277050 disk-uuid[788]: run partprobe(8) or kpartx(8) Oct 27 08:27:43.277050 disk-uuid[788]: The operation has completed successfully. Oct 27 08:27:43.284029 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 27 08:27:43.284101 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 27 08:27:43.284758 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Oct 27 08:27:43.372448 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (882) Oct 27 08:27:43.383109 kernel: BTRFS info (device sda6): first mount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:27:43.383139 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 27 08:27:43.449450 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 27 08:27:43.449494 kernel: BTRFS info (device sda6): enabling free space tree Oct 27 08:27:43.453450 kernel: BTRFS info (device sda6): last unmount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:27:43.454183 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 27 08:27:43.454976 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 27 08:27:43.743942 ignition[901]: Ignition 2.22.0 Oct 27 08:27:43.743955 ignition[901]: Stage: fetch-offline Oct 27 08:27:43.743979 ignition[901]: no configs at "/usr/lib/ignition/base.d" Oct 27 08:27:43.743986 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 27 08:27:43.744035 ignition[901]: parsed url from cmdline: "" Oct 27 08:27:43.744037 ignition[901]: no config URL provided Oct 27 08:27:43.744040 ignition[901]: reading system config file "/usr/lib/ignition/user.ign" Oct 27 08:27:43.744045 ignition[901]: no config at "/usr/lib/ignition/user.ign" Oct 27 08:27:43.744420 ignition[901]: config successfully fetched Oct 27 08:27:43.744445 ignition[901]: parsing config with SHA512: 902ba7a231bc6fe4d1fe8e68d56dc34a1899a1e757da641f29347a8d549ba6d0935da6abacc4833074c202454261a56a7342640ca47320ea1e31a3206de9e843 Oct 27 08:27:43.747879 unknown[901]: fetched base config from "system" Oct 27 08:27:43.748088 ignition[901]: fetch-offline: fetch-offline passed Oct 27 08:27:43.747884 unknown[901]: fetched user config from "vmware" Oct 27 08:27:43.748118 ignition[901]: Ignition finished successfully Oct 27 08:27:43.749169 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 27 08:27:43.749405 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 27 08:27:43.749917 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 27 08:27:43.770262 ignition[907]: Ignition 2.22.0 Oct 27 08:27:43.770279 ignition[907]: Stage: kargs Oct 27 08:27:43.770365 ignition[907]: no configs at "/usr/lib/ignition/base.d" Oct 27 08:27:43.770371 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 27 08:27:43.770834 ignition[907]: kargs: kargs passed Oct 27 08:27:43.770860 ignition[907]: Ignition finished successfully Oct 27 08:27:43.772200 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 27 08:27:43.772901 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 27 08:27:43.798057 ignition[913]: Ignition 2.22.0 Oct 27 08:27:43.798344 ignition[913]: Stage: disks Oct 27 08:27:43.798537 ignition[913]: no configs at "/usr/lib/ignition/base.d" Oct 27 08:27:43.798655 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 27 08:27:43.799286 ignition[913]: disks: disks passed Oct 27 08:27:43.799418 ignition[913]: Ignition finished successfully Oct 27 08:27:43.800212 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 27 08:27:43.800618 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Oct 27 08:27:43.800750 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 27 08:27:43.800941 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 27 08:27:43.801132 systemd[1]: Reached target sysinit.target - System Initialization. Oct 27 08:27:43.801305 systemd[1]: Reached target basic.target - Basic System. Oct 27 08:27:43.802022 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 27 08:27:43.844386 systemd-fsck[921]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Oct 27 08:27:43.845694 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 27 08:27:43.847502 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 27 08:27:43.930445 kernel: EXT4-fs (sda9): mounted filesystem e90e2fe3-e1db-4bff-abac-c8d1d032f674 r/w with ordered data mode. Quota mode: none. Oct 27 08:27:43.930702 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 27 08:27:43.931171 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 27 08:27:43.932438 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 27 08:27:43.933478 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 27 08:27:43.933867 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 27 08:27:43.934505 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 27 08:27:43.934726 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 27 08:27:43.943381 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 27 08:27:43.944411 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 27 08:27:43.949454 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (930) Oct 27 08:27:43.952106 kernel: BTRFS info (device sda6): first mount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:27:43.952139 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 27 08:27:43.956891 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 27 08:27:43.956918 kernel: BTRFS info (device sda6): enabling free space tree Oct 27 08:27:43.957749 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 27 08:27:43.983571 initrd-setup-root[954]: cut: /sysroot/etc/passwd: No such file or directory Oct 27 08:27:43.984518 systemd-networkd[723]: ens192: Gained IPv6LL Oct 27 08:27:43.987478 initrd-setup-root[961]: cut: /sysroot/etc/group: No such file or directory Oct 27 08:27:43.990077 initrd-setup-root[968]: cut: /sysroot/etc/shadow: No such file or directory Oct 27 08:27:43.992453 initrd-setup-root[975]: cut: /sysroot/etc/gshadow: No such file or directory Oct 27 08:27:44.048985 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 27 08:27:44.049940 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 27 08:27:44.051498 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 27 08:27:44.059217 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 27 08:27:44.061456 kernel: BTRFS info (device sda6): last unmount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:27:44.073848 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Oct 27 08:27:44.081638 ignition[1045]: INFO : Ignition 2.22.0 Oct 27 08:27:44.081638 ignition[1045]: INFO : Stage: mount Oct 27 08:27:44.082061 ignition[1045]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 08:27:44.082061 ignition[1045]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 27 08:27:44.082326 ignition[1045]: INFO : mount: mount passed Oct 27 08:27:44.082326 ignition[1045]: INFO : Ignition finished successfully Oct 27 08:27:44.083213 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 27 08:27:44.084116 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 27 08:27:44.931676 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 27 08:27:44.987280 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1055) Oct 27 08:27:44.987319 kernel: BTRFS info (device sda6): first mount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:27:44.987329 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 27 08:27:44.991669 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 27 08:27:44.991705 kernel: BTRFS info (device sda6): enabling free space tree Oct 27 08:27:44.992704 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 27 08:27:45.014405 ignition[1072]: INFO : Ignition 2.22.0 Oct 27 08:27:45.014405 ignition[1072]: INFO : Stage: files Oct 27 08:27:45.014811 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 08:27:45.014811 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 27 08:27:45.015137 ignition[1072]: DEBUG : files: compiled without relabeling support, skipping Oct 27 08:27:45.025702 ignition[1072]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 27 08:27:45.025702 ignition[1072]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 27 08:27:45.055690 ignition[1072]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 27 08:27:45.055910 ignition[1072]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 27 08:27:45.079648 unknown[1072]: wrote ssh authorized keys file for user: core Oct 27 08:27:45.079971 ignition[1072]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 27 08:27:45.082426 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 27 08:27:45.082707 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 27 08:27:45.126475 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 27 08:27:45.198174 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 27 08:27:45.198174 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 27 08:27:45.198174 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 27 08:27:45.198174 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 27 08:27:45.200540 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] 
writing file "/sysroot/home/core/nginx.yaml" Oct 27 08:27:45.200884 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 27 08:27:45.201134 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 27 08:27:45.201134 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 27 08:27:45.201134 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 27 08:27:45.206091 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 27 08:27:45.206266 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 27 08:27:45.206440 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 27 08:27:45.209550 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 27 08:27:45.209550 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 27 08:27:45.210055 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Oct 27 08:27:45.476879 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 27 08:27:45.814030 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 27 08:27:45.814030 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Oct 27 08:27:45.820559 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Oct 27 08:27:45.820559 ignition[1072]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Oct 27 08:27:45.826077 ignition[1072]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 27 08:27:45.830339 ignition[1072]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 27 08:27:45.830339 ignition[1072]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Oct 27 08:27:45.830339 ignition[1072]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Oct 27 08:27:45.830909 ignition[1072]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 27 08:27:45.830909 ignition[1072]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 27 08:27:45.830909 ignition[1072]: INFO : files: op(e): [finished] processing unit 
"coreos-metadata.service" Oct 27 08:27:45.830909 ignition[1072]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Oct 27 08:27:46.188379 ignition[1072]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 27 08:27:46.191446 ignition[1072]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 27 08:27:46.191446 ignition[1072]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Oct 27 08:27:46.191446 ignition[1072]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Oct 27 08:27:46.191446 ignition[1072]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Oct 27 08:27:46.191446 ignition[1072]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 27 08:27:46.191446 ignition[1072]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 27 08:27:46.191446 ignition[1072]: INFO : files: files passed Oct 27 08:27:46.191446 ignition[1072]: INFO : Ignition finished successfully Oct 27 08:27:46.192428 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 27 08:27:46.193796 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 27 08:27:46.195530 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 27 08:27:46.214062 initrd-setup-root-after-ignition[1105]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 27 08:27:46.214062 initrd-setup-root-after-ignition[1105]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 27 08:27:46.214832 initrd-setup-root-after-ignition[1109]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 27 08:27:46.215668 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 27 08:27:46.215969 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 27 08:27:46.216695 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 27 08:27:46.217019 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 27 08:27:46.217786 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 27 08:27:46.260364 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 27 08:27:46.260501 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 27 08:27:46.260836 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 27 08:27:46.260990 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 27 08:27:46.261409 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 27 08:27:46.262033 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 27 08:27:46.276079 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 27 08:27:46.276913 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 27 08:27:46.288312 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 27 08:27:46.288468 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Oct 27 08:27:46.288700 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 08:27:46.288930 systemd[1]: Stopped target timers.target - Timer Units. Oct 27 08:27:46.289138 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 27 08:27:46.289213 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 27 08:27:46.289594 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 27 08:27:46.289756 systemd[1]: Stopped target basic.target - Basic System. Oct 27 08:27:46.289938 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 27 08:27:46.290137 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 27 08:27:46.290344 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 27 08:27:46.290566 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 27 08:27:46.290780 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 27 08:27:46.290985 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 27 08:27:46.291196 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 27 08:27:46.291412 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 27 08:27:46.291615 systemd[1]: Stopped target swap.target - Swaps. Oct 27 08:27:46.291798 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 27 08:27:46.291870 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 27 08:27:46.292207 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 27 08:27:46.292376 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 08:27:46.292571 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 27 08:27:46.292618 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 08:27:46.292802 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 27 08:27:46.292862 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 27 08:27:46.293145 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 27 08:27:46.293211 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 27 08:27:46.293446 systemd[1]: Stopped target paths.target - Path Units. Oct 27 08:27:46.293597 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 27 08:27:46.293646 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 08:27:46.293818 systemd[1]: Stopped target slices.target - Slice Units. Oct 27 08:27:46.294025 systemd[1]: Stopped target sockets.target - Socket Units. Oct 27 08:27:46.294236 systemd[1]: iscsid.socket: Deactivated successfully. Oct 27 08:27:46.294284 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 27 08:27:46.294501 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 27 08:27:46.294545 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 27 08:27:46.294723 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 27 08:27:46.294790 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 27 08:27:46.295053 systemd[1]: ignition-files.service: Deactivated successfully. Oct 27 08:27:46.295122 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Oct 27 08:27:46.296528 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 27 08:27:46.297033 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 27 08:27:46.297145 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 27 08:27:46.297213 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 08:27:46.298794 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 27 08:27:46.298862 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 08:27:46.299061 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 27 08:27:46.299123 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 27 08:27:46.301779 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 27 08:27:46.307100 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 27 08:27:46.315751 ignition[1130]: INFO : Ignition 2.22.0 Oct 27 08:27:46.315751 ignition[1130]: INFO : Stage: umount Oct 27 08:27:46.316167 ignition[1130]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 08:27:46.316167 ignition[1130]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 27 08:27:46.317004 ignition[1130]: INFO : umount: umount passed Oct 27 08:27:46.317004 ignition[1130]: INFO : Ignition finished successfully Oct 27 08:27:46.317336 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 27 08:27:46.317407 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 27 08:27:46.318170 systemd[1]: Stopped target network.target - Network. Oct 27 08:27:46.318496 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 27 08:27:46.318688 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 27 08:27:46.318969 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 27 08:27:46.319006 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 27 08:27:46.319401 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 27 08:27:46.319559 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 27 08:27:46.319854 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 27 08:27:46.320008 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 27 08:27:46.320394 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 27 08:27:46.322158 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 27 08:27:46.329254 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 27 08:27:46.329324 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 27 08:27:46.332162 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 27 08:27:46.332327 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 27 08:27:46.333568 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 27 08:27:46.334050 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 27 08:27:46.334313 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 27 08:27:46.334463 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 27 08:27:46.335188 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 27 08:27:46.335395 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Oct 27 08:27:46.335558 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 27 08:27:46.335839 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Oct 27 08:27:46.335965 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Oct 27 08:27:46.336362 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 27 08:27:46.336385 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 27 08:27:46.336715 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 27 08:27:46.336739 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 27 08:27:46.337129 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 08:27:46.348706 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 27 08:27:46.348917 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 08:27:46.349161 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 27 08:27:46.349184 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 27 08:27:46.349299 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 27 08:27:46.349314 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 08:27:46.349426 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 27 08:27:46.349467 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 27 08:27:46.349752 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 27 08:27:46.349776 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 27 08:27:46.350071 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 27 08:27:46.350096 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 27 08:27:46.351583 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 27 08:27:46.351678 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 27 08:27:46.351708 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 27 08:27:46.351823 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 27 08:27:46.351847 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 27 08:27:46.351956 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 27 08:27:46.351981 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 27 08:27:46.352089 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 27 08:27:46.352112 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 08:27:46.352216 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 27 08:27:46.352238 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:27:46.360775 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 27 08:27:46.360995 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 27 08:27:46.388167 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 27 08:27:46.388243 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Oct 27 08:27:46.628991 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 27 08:27:46.629080 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 27 08:27:46.629548 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 27 08:27:46.629716 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 27 08:27:46.629753 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 27 08:27:46.630584 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 27 08:27:46.649493 systemd[1]: Switching root. Oct 27 08:27:46.686249 systemd-journald[328]: Journal stopped Oct 27 08:27:47.689528 systemd-journald[328]: Received SIGTERM from PID 1 (systemd). Oct 27 08:27:47.689560 kernel: SELinux: policy capability network_peer_controls=1 Oct 27 08:27:47.689569 kernel: SELinux: policy capability open_perms=1 Oct 27 08:27:47.689576 kernel: SELinux: policy capability extended_socket_class=1 Oct 27 08:27:47.689582 kernel: SELinux: policy capability always_check_network=0 Oct 27 08:27:47.689588 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 27 08:27:47.689596 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 27 08:27:47.689603 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 27 08:27:47.689610 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 27 08:27:47.689616 kernel: SELinux: policy capability userspace_initial_context=0 Oct 27 08:27:47.689623 kernel: audit: type=1403 audit(1761553667.102:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 27 08:27:47.689631 systemd[1]: Successfully loaded SELinux policy in 56.637ms. Oct 27 08:27:47.689640 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.943ms. Oct 27 08:27:47.689648 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 27 08:27:47.689656 systemd[1]: Detected virtualization vmware. Oct 27 08:27:47.689665 systemd[1]: Detected architecture x86-64. Oct 27 08:27:47.689672 systemd[1]: Detected first boot. Oct 27 08:27:47.689679 systemd[1]: Initializing machine ID from random generator. Oct 27 08:27:47.689686 zram_generator::config[1173]: No configuration found. Oct 27 08:27:47.689806 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Oct 27 08:27:47.689820 kernel: Guest personality initialized and is active Oct 27 08:27:47.689827 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 27 08:27:47.689833 kernel: Initialized host personality Oct 27 08:27:47.689840 kernel: NET: Registered PF_VSOCK protocol family Oct 27 08:27:47.689847 systemd[1]: Populated /etc with preset unit settings. Oct 27 08:27:47.689856 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 27 08:27:47.689865 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Oct 27 08:27:47.689872 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 27 08:27:47.689879 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Oct 27 08:27:47.689886 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 27 08:27:47.689893 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 27 08:27:47.689901 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 27 08:27:47.689911 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 27 08:27:47.689919 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 27 08:27:47.689926 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 27 08:27:47.689934 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 27 08:27:47.689943 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 27 08:27:47.689950 systemd[1]: Created slice user.slice - User and Session Slice. Oct 27 08:27:47.689958 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 08:27:47.689966 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 08:27:47.689975 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 27 08:27:47.689983 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 27 08:27:47.689990 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 27 08:27:47.689998 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 27 08:27:47.690005 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 27 08:27:47.690014 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 08:27:47.690022 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 27 08:27:47.690029 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 27 08:27:47.690037 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 27 08:27:47.690044 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 27 08:27:47.690052 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 27 08:27:47.690061 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 08:27:47.690068 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 27 08:27:47.690076 systemd[1]: Reached target slices.target - Slice Units. Oct 27 08:27:47.690084 systemd[1]: Reached target swap.target - Swaps. Oct 27 08:27:47.690091 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 27 08:27:47.690098 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 27 08:27:47.690108 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 27 08:27:47.690116 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 27 08:27:47.690124 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 27 08:27:47.690131 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 08:27:47.690140 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 27 08:27:47.690148 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Oct 27 08:27:47.690155 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 27 08:27:47.690163 systemd[1]: Mounting media.mount - External Media Directory... Oct 27 08:27:47.690170 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:27:47.690178 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 27 08:27:47.690186 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 27 08:27:47.690195 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 27 08:27:47.690203 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 27 08:27:47.690211 systemd[1]: Reached target machines.target - Containers. Oct 27 08:27:47.690218 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 27 08:27:47.690226 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Oct 27 08:27:47.690233 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 27 08:27:47.690241 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 27 08:27:47.690250 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 08:27:47.690257 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 27 08:27:47.690265 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 08:27:47.690272 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 27 08:27:47.690280 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 08:27:47.690287 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 27 08:27:47.690296 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 27 08:27:47.690304 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 27 08:27:47.690311 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 27 08:27:47.690319 systemd[1]: Stopped systemd-fsck-usr.service. Oct 27 08:27:47.690327 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 08:27:47.690334 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 27 08:27:47.690342 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 27 08:27:47.690351 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 27 08:27:47.690359 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 27 08:27:47.690366 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 27 08:27:47.690374 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 27 08:27:47.690382 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Oct 27 08:27:47.690391 kernel: fuse: init (API version 7.41) Oct 27 08:27:47.690403 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 27 08:27:47.690418 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 27 08:27:47.690426 systemd[1]: Mounted media.mount - External Media Directory. Oct 27 08:27:47.691602 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 27 08:27:47.691617 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 27 08:27:47.691625 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 27 08:27:47.691633 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 08:27:47.691643 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 08:27:47.691651 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 08:27:47.691658 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 08:27:47.691666 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 08:27:47.691674 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 27 08:27:47.691681 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 27 08:27:47.691689 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 27 08:27:47.691698 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 27 08:27:47.691705 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 08:27:47.691713 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 08:27:47.691721 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 27 08:27:47.691729 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 27 08:27:47.691736 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 27 08:27:47.691744 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 27 08:27:47.691753 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 27 08:27:47.691761 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 27 08:27:47.691768 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 27 08:27:47.691779 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 27 08:27:47.691789 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 27 08:27:47.691797 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 27 08:27:47.691805 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 08:27:47.691813 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 27 08:27:47.691823 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 27 08:27:47.691831 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 27 08:27:47.691839 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 27 08:27:47.691847 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Oct 27 08:27:47.691856 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 27 08:27:47.691864 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 27 08:27:47.691872 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 27 08:27:47.691880 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 27 08:27:47.691888 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 27 08:27:47.691896 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 27 08:27:47.691906 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 27 08:27:47.691937 systemd-journald[1256]: Collecting audit messages is disabled. Oct 27 08:27:47.691956 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 27 08:27:47.691966 systemd-journald[1256]: Journal started Oct 27 08:27:47.691983 systemd-journald[1256]: Runtime Journal (/run/log/journal/758fef2f916b4465a760b1e2ef5ae42d) is 4.8M, max 38.5M, 33.7M free. Oct 27 08:27:47.696644 systemd[1]: Started systemd-journald.service - Journal Service. Oct 27 08:27:47.455579 systemd[1]: Queued start job for default target multi-user.target. Oct 27 08:27:47.464556 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Oct 27 08:27:47.464848 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 27 08:27:47.695496 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 27 08:27:47.701928 kernel: loop1: detected capacity change from 0 to 110984 Oct 27 08:27:47.705732 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 27 08:27:47.710023 jq[1243]: true Oct 27 08:27:47.710962 jq[1268]: true Oct 27 08:27:47.722022 kernel: ACPI: bus type drm_connector registered Oct 27 08:27:47.719865 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 27 08:27:47.720016 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 27 08:27:47.722309 ignition[1274]: Ignition 2.22.0 Oct 27 08:27:47.725559 ignition[1274]: deleting config from guestinfo properties Oct 27 08:27:47.730478 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 27 08:27:47.745454 kernel: loop2: detected capacity change from 0 to 2968 Oct 27 08:27:47.747671 systemd-journald[1256]: Time spent on flushing to /var/log/journal/758fef2f916b4465a760b1e2ef5ae42d is 58.172ms for 1762 entries. Oct 27 08:27:47.747671 systemd-journald[1256]: System Journal (/var/log/journal/758fef2f916b4465a760b1e2ef5ae42d) is 8M, max 588.1M, 580.1M free. Oct 27 08:27:47.830101 systemd-journald[1256]: Received client request to flush runtime journal. Oct 27 08:27:47.830143 kernel: loop3: detected capacity change from 0 to 229808 Oct 27 08:27:47.830165 kernel: loop4: detected capacity change from 0 to 128048 Oct 27 08:27:47.749612 systemd-tmpfiles[1294]: ACLs are not supported, ignoring. Oct 27 08:27:47.759254 ignition[1274]: Successfully deleted config Oct 27 08:27:47.749622 systemd-tmpfiles[1294]: ACLs are not supported, ignoring. Oct 27 08:27:47.761544 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 27 08:27:47.763537 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). 
Oct 27 08:27:47.763882 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 27 08:27:47.767729 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 27 08:27:47.820310 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 27 08:27:47.822724 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 27 08:27:47.824567 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 27 08:27:47.829744 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 08:27:47.831425 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 27 08:27:47.838566 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 27 08:27:47.852965 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. Oct 27 08:27:47.852980 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. Oct 27 08:27:47.857210 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 27 08:27:47.859448 kernel: loop5: detected capacity change from 0 to 110984 Oct 27 08:27:47.868252 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 27 08:27:47.874448 kernel: loop6: detected capacity change from 0 to 2968 Oct 27 08:27:47.883448 kernel: loop7: detected capacity change from 0 to 229808 Oct 27 08:27:47.916079 systemd-resolved[1342]: Positive Trust Anchors: Oct 27 08:27:47.916104 systemd-resolved[1342]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 27 08:27:47.916107 systemd-resolved[1342]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 27 08:27:47.916131 systemd-resolved[1342]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 27 08:27:47.919207 systemd-resolved[1342]: Defaulting to hostname 'linux'. Oct 27 08:27:47.920090 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 27 08:27:47.920250 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 27 08:27:47.939458 kernel: loop1: detected capacity change from 0 to 128048 Oct 27 08:27:47.958952 (sd-merge)[1351]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-vmware.raw'. Oct 27 08:27:47.961135 (sd-merge)[1351]: Merged extensions into '/usr'. Oct 27 08:27:47.964532 systemd[1]: Reload requested from client PID 1292 ('systemd-sysext') (unit systemd-sysext.service)... Oct 27 08:27:47.964542 systemd[1]: Reloading... Oct 27 08:27:48.022729 zram_generator::config[1387]: No configuration found. Oct 27 08:27:48.109453 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 27 08:27:48.159357 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 27 08:27:48.159470 systemd[1]: Reloading finished in 194 ms. 
Oct 27 08:27:48.177442 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 27 08:27:48.183454 systemd[1]: Starting ensure-sysext.service... Oct 27 08:27:48.184379 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 27 08:27:48.200965 systemd[1]: Reload requested from client PID 1440 ('systemctl') (unit ensure-sysext.service)... Oct 27 08:27:48.201060 systemd[1]: Reloading... Oct 27 08:27:48.212595 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 27 08:27:48.212617 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 27 08:27:48.212896 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 27 08:27:48.213121 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 27 08:27:48.213828 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 27 08:27:48.214035 systemd-tmpfiles[1441]: ACLs are not supported, ignoring. Oct 27 08:27:48.214069 systemd-tmpfiles[1441]: ACLs are not supported, ignoring. Oct 27 08:27:48.240474 zram_generator::config[1467]: No configuration found. Oct 27 08:27:48.256246 systemd-tmpfiles[1441]: Detected autofs mount point /boot during canonicalization of boot. Oct 27 08:27:48.256256 systemd-tmpfiles[1441]: Skipping /boot Oct 27 08:27:48.262817 systemd-tmpfiles[1441]: Detected autofs mount point /boot during canonicalization of boot. Oct 27 08:27:48.262989 systemd-tmpfiles[1441]: Skipping /boot Oct 27 08:27:48.328757 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 27 08:27:48.376391 systemd[1]: Reloading finished in 175 ms. Oct 27 08:27:48.565563 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 08:27:48.571735 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:27:48.574632 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 27 08:27:48.577660 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 27 08:27:48.581691 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 27 08:27:48.585149 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 08:27:48.586381 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 27 08:27:48.587485 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 08:27:48.588641 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 08:27:48.588838 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 08:27:48.588912 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 08:27:48.591630 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Oct 27 08:27:48.595481 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 27 08:27:48.595664 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:27:48.598584 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 27 08:27:48.603514 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 08:27:48.603976 systemd[1]: Finished ensure-sysext.service. Oct 27 08:27:48.604202 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 08:27:48.604317 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 08:27:48.606823 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 08:27:48.606942 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 08:27:48.607353 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 27 08:27:48.615709 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 27 08:27:48.616060 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 27 08:27:48.616173 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 27 08:27:48.616429 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 08:27:48.616711 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 08:27:48.620945 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 27 08:27:48.626633 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 27 08:27:48.660539 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 27 08:27:48.660865 systemd[1]: Reached target time-set.target - System Time Set. Oct 27 08:27:48.727828 systemd-udevd[1538]: Using default interface naming scheme 'v257'. Oct 27 08:27:48.728913 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 27 08:27:48.805042 augenrules[1570]: No rules Oct 27 08:27:48.805303 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 08:27:48.805466 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 27 08:27:49.006629 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 08:27:49.008825 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 27 08:27:49.047838 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 27 08:27:49.094454 kernel: mousedev: PS/2 mouse device common for all mice Oct 27 08:27:49.099511 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 27 08:27:49.105457 kernel: ACPI: button: Power Button [PWRF] Oct 27 08:27:49.107955 systemd-networkd[1580]: lo: Link UP Oct 27 08:27:49.107960 systemd-networkd[1580]: lo: Gained carrier Oct 27 08:27:49.109631 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 27 08:27:49.109794 systemd[1]: Reached target network.target - Network. Oct 27 08:27:49.110529 systemd-networkd[1580]: ens192: Configuring with /etc/systemd/network/00-vmware.network. 
Oct 27 08:27:49.113689 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 27 08:27:49.113903 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 27 08:27:49.111504 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 27 08:27:49.114554 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 27 08:27:49.117946 systemd-networkd[1580]: ens192: Link UP Oct 27 08:27:49.118383 systemd-networkd[1580]: ens192: Gained carrier Oct 27 08:27:49.123686 systemd-timesyncd[1544]: Network configuration changed, trying to establish connection. Oct 27 08:27:49.154683 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 27 08:27:49.174458 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Oct 27 08:27:49.249474 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 27 08:27:49.249925 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 27 08:27:49.250325 (udev-worker)[1586]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Oct 27 08:27:49.298580 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:27:49.320956 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Oct 27 08:27:49.322474 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 27 08:27:49.339676 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 27 08:27:49.373022 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:27:49.595471 ldconfig[1530]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 27 08:27:49.597210 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 27 08:27:49.598281 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 27 08:27:49.611185 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 27 08:27:49.611450 systemd[1]: Reached target sysinit.target - System Initialization. Oct 27 08:27:49.611615 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 27 08:27:49.611740 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 27 08:27:49.611856 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 27 08:27:49.612035 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 27 08:27:49.612186 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 27 08:27:49.612295 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 27 08:27:49.612404 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 27 08:27:49.612423 systemd[1]: Reached target paths.target - Path Units. Oct 27 08:27:49.612559 systemd[1]: Reached target timers.target - Timer Units. 
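systemd-networkd matches ens192 against /etc/systemd/network/00-vmware.network before the link comes up. The log does not show that file's contents; a minimal sketch of a DHCP-style .network file of roughly the shape Flatcar ships for VMware (the exact options here are an assumption):

cat <<'EOF' >/etc/systemd/network/00-vmware.network
# Match the vmxnet3 NIC and configure it via DHCP.
[Match]
Name=ens192

[Network]
DHCP=yes
EOF
networkctl reload   # have systemd-networkd re-read its .network files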
Oct 27 08:27:49.613269 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 27 08:27:49.614541 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 27 08:27:49.615878 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 27 08:27:49.616068 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 27 08:27:49.616184 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 27 08:27:49.618502 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 27 08:27:49.618779 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 27 08:27:49.619266 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 27 08:27:49.619849 systemd[1]: Reached target sockets.target - Socket Units. Oct 27 08:27:49.619943 systemd[1]: Reached target basic.target - Basic System. Oct 27 08:27:49.620065 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 27 08:27:49.620083 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 27 08:27:49.620956 systemd[1]: Starting containerd.service - containerd container runtime... Oct 27 08:27:49.623516 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 27 08:27:49.624801 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 27 08:27:49.626498 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 27 08:27:49.628551 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 27 08:27:49.628674 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 27 08:27:49.634571 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 27 08:27:49.635476 jq[1642]: false Oct 27 08:27:49.635774 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 27 08:27:49.637559 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 27 08:27:49.640574 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 27 08:27:49.641994 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 27 08:27:49.648525 extend-filesystems[1643]: Found /dev/sda6 Oct 27 08:27:49.650728 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 27 08:27:49.650858 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 27 08:27:49.651369 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 27 08:27:49.654621 systemd[1]: Starting update-engine.service - Update Engine... Oct 27 08:27:49.656468 google_oslogin_nss_cache[1644]: oslogin_cache_refresh[1644]: Refreshing passwd entry cache Oct 27 08:27:49.656247 oslogin_cache_refresh[1644]: Refreshing passwd entry cache Oct 27 08:27:49.656875 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 27 08:27:49.662457 extend-filesystems[1643]: Found /dev/sda9 Oct 27 08:27:49.661705 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... 
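At this point docker, sshd, hostnamed, and D-Bus exist only as listening .socket units; the daemons themselves start lazily when the first client connects. To see which sockets are held open and which service each one would activate:

# List active socket units and the service each one triggers on connection.
systemctl list-sockets --all
# Inspect one of them, e.g. the Docker API socket set up above.
systemctl cat docker.socket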
Oct 27 08:27:49.664311 google_oslogin_nss_cache[1644]: oslogin_cache_refresh[1644]: Failure getting users, quitting Oct 27 08:27:49.664311 google_oslogin_nss_cache[1644]: oslogin_cache_refresh[1644]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 27 08:27:49.664311 google_oslogin_nss_cache[1644]: oslogin_cache_refresh[1644]: Refreshing group entry cache Oct 27 08:27:49.663881 oslogin_cache_refresh[1644]: Failure getting users, quitting Oct 27 08:27:49.664395 extend-filesystems[1643]: Checking size of /dev/sda9 Oct 27 08:27:49.663893 oslogin_cache_refresh[1644]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 27 08:27:49.663921 oslogin_cache_refresh[1644]: Refreshing group entry cache Oct 27 08:27:49.666021 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 27 08:27:49.666334 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 27 08:27:49.667142 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 27 08:27:49.668724 google_oslogin_nss_cache[1644]: oslogin_cache_refresh[1644]: Failure getting groups, quitting Oct 27 08:27:49.668724 google_oslogin_nss_cache[1644]: oslogin_cache_refresh[1644]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 27 08:27:49.668716 oslogin_cache_refresh[1644]: Failure getting groups, quitting Oct 27 08:27:49.668724 oslogin_cache_refresh[1644]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 27 08:27:49.670882 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 27 08:27:49.671020 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 27 08:27:49.671288 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 27 08:27:49.673586 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 27 08:27:49.681995 extend-filesystems[1643]: Resized partition /dev/sda9 Oct 27 08:27:49.684383 extend-filesystems[1680]: resize2fs 1.47.3 (8-Jul-2025) Oct 27 08:27:49.688218 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 1635323 blocks Oct 27 08:27:49.702857 kernel: EXT4-fs (sda9): resized filesystem to 1635323 Oct 27 08:27:49.689918 systemd[1]: motdgen.service: Deactivated successfully. Oct 27 08:27:49.702946 update_engine[1654]: I20251027 08:27:49.700934 1654 main.cc:92] Flatcar Update Engine starting Oct 27 08:27:49.703190 jq[1655]: true Oct 27 08:27:49.690076 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 27 08:27:49.703855 extend-filesystems[1680]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 27 08:27:49.703855 extend-filesystems[1680]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 27 08:27:49.703855 extend-filesystems[1680]: The filesystem on /dev/sda9 is now 1635323 (4k) blocks long. Oct 27 08:27:49.707142 extend-filesystems[1643]: Resized filesystem in /dev/sda9 Oct 27 08:27:49.704351 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 27 08:27:49.704516 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 27 08:27:49.707833 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Oct 27 08:27:49.709176 tar[1663]: linux-amd64/LICENSE Oct 27 08:27:49.709176 tar[1663]: linux-amd64/helm Oct 27 08:27:49.712457 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... 
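extend-filesystems grows the root filesystem online: the kernel lines show /dev/sda9 resized from 1617920 to 1635323 4k blocks while / remains mounted. The manual equivalent, shown only for reference with the device name from this log, is:

# Grow the mounted ext4 filesystem on /dev/sda9 to fill its partition
# (resize2fs resizes online when the filesystem is mounted).
resize2fs /dev/sda9
# Confirm the new size in blocks.
dumpe2fs -h /dev/sda9 | grep 'Block count'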
Oct 27 08:27:49.719669 (ntainerd)[1685]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 27 08:27:49.728403 jq[1684]: true Oct 27 08:27:49.740406 unknown[1695]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Oct 27 08:27:49.745764 dbus-daemon[1640]: [system] SELinux support is enabled Oct 27 08:27:49.745871 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 27 08:27:49.749119 unknown[1695]: Core dump limit set to -1 Oct 27 08:27:49.750003 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Oct 27 08:27:49.750266 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 27 08:27:49.750282 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 27 08:27:49.751614 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 27 08:27:49.751627 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 27 08:27:49.755039 systemd[1]: Started update-engine.service - Update Engine. Oct 27 08:27:49.756647 update_engine[1654]: I20251027 08:27:49.756518 1654 update_check_scheduler.cc:74] Next update check in 3m33s Oct 27 08:27:49.790066 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 27 08:27:49.837720 bash[1719]: Updated "/home/core/.ssh/authorized_keys" Oct 27 08:27:49.840114 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 27 08:27:49.841015 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 27 08:27:49.854221 systemd-logind[1652]: Watching system buttons on /dev/input/event2 (Power Button) Oct 27 08:27:49.854237 systemd-logind[1652]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 27 08:27:49.855284 systemd-logind[1652]: New seat seat0. Oct 27 08:27:49.858102 systemd[1]: Started systemd-logind.service - User Login Management. Oct 27 08:27:49.938016 sshd_keygen[1688]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 27 08:27:49.977302 locksmithd[1703]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 27 08:27:49.989980 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 27 08:27:49.992484 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 27 08:27:50.011881 containerd[1685]: time="2025-10-27T08:27:50Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 27 08:27:50.013594 containerd[1685]: time="2025-10-27T08:27:50.013579331Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 27 08:27:50.016704 systemd[1]: issuegen.service: Deactivated successfully. Oct 27 08:27:50.016848 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 27 08:27:50.019581 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
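sshd-keygen generates the host key set (RSA, ECDSA, ED25519) before the login prompts appear. The same effect can be reproduced with OpenSSH's own tooling; this is only a reference for what that unit does:

# Create any missing host keys of the default types under /etc/ssh.
ssh-keygen -A
# Print the fingerprint of each generated host key.
for k in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf "$k"; done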
Oct 27 08:27:50.025798 containerd[1685]: time="2025-10-27T08:27:50.025300413Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="5.875µs" Oct 27 08:27:50.027795 containerd[1685]: time="2025-10-27T08:27:50.027781423Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 27 08:27:50.027841 containerd[1685]: time="2025-10-27T08:27:50.027832983Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 27 08:27:50.028009 containerd[1685]: time="2025-10-27T08:27:50.027999555Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 27 08:27:50.028197 containerd[1685]: time="2025-10-27T08:27:50.028188587Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 27 08:27:50.028288 containerd[1685]: time="2025-10-27T08:27:50.028279950Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 27 08:27:50.028374 containerd[1685]: time="2025-10-27T08:27:50.028364740Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 27 08:27:50.028912 containerd[1685]: time="2025-10-27T08:27:50.028900551Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 27 08:27:50.029111 containerd[1685]: time="2025-10-27T08:27:50.029098090Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 27 08:27:50.033182 containerd[1685]: time="2025-10-27T08:27:50.032697795Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 27 08:27:50.033182 containerd[1685]: time="2025-10-27T08:27:50.032722440Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 27 08:27:50.033182 containerd[1685]: time="2025-10-27T08:27:50.032728534Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 27 08:27:50.033182 containerd[1685]: time="2025-10-27T08:27:50.032785933Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 27 08:27:50.033182 containerd[1685]: time="2025-10-27T08:27:50.032901180Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 27 08:27:50.033182 containerd[1685]: time="2025-10-27T08:27:50.032918245Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 27 08:27:50.033182 containerd[1685]: time="2025-10-27T08:27:50.032929005Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 27 08:27:50.033182 containerd[1685]: time="2025-10-27T08:27:50.032952609Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 27 08:27:50.033182 containerd[1685]: 
time="2025-10-27T08:27:50.033073559Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 27 08:27:50.033182 containerd[1685]: time="2025-10-27T08:27:50.033103098Z" level=info msg="metadata content store policy set" policy=shared Oct 27 08:27:50.037486 containerd[1685]: time="2025-10-27T08:27:50.037468568Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 27 08:27:50.037504 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 27 08:27:50.038197 containerd[1685]: time="2025-10-27T08:27:50.037588289Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 27 08:27:50.038197 containerd[1685]: time="2025-10-27T08:27:50.037601926Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 27 08:27:50.038197 containerd[1685]: time="2025-10-27T08:27:50.037614613Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 27 08:27:50.038197 containerd[1685]: time="2025-10-27T08:27:50.037622771Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 27 08:27:50.038197 containerd[1685]: time="2025-10-27T08:27:50.037628752Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 27 08:27:50.038197 containerd[1685]: time="2025-10-27T08:27:50.037651113Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 27 08:27:50.038197 containerd[1685]: time="2025-10-27T08:27:50.037661658Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 27 08:27:50.038197 containerd[1685]: time="2025-10-27T08:27:50.037670619Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 27 08:27:50.038197 containerd[1685]: time="2025-10-27T08:27:50.037676392Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 27 08:27:50.038197 containerd[1685]: time="2025-10-27T08:27:50.037681142Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 27 08:27:50.038197 containerd[1685]: time="2025-10-27T08:27:50.037688192Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 27 08:27:50.038197 containerd[1685]: time="2025-10-27T08:27:50.037748028Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 27 08:27:50.038197 containerd[1685]: time="2025-10-27T08:27:50.037759757Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 27 08:27:50.038197 containerd[1685]: time="2025-10-27T08:27:50.037772320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 27 08:27:50.038389 containerd[1685]: time="2025-10-27T08:27:50.037779246Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 27 08:27:50.038389 containerd[1685]: time="2025-10-27T08:27:50.037784856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 27 08:27:50.038389 containerd[1685]: time="2025-10-27T08:27:50.037790867Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images 
type=io.containerd.grpc.v1 Oct 27 08:27:50.038389 containerd[1685]: time="2025-10-27T08:27:50.037797163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 27 08:27:50.038389 containerd[1685]: time="2025-10-27T08:27:50.037802584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 27 08:27:50.038389 containerd[1685]: time="2025-10-27T08:27:50.037816283Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 27 08:27:50.038389 containerd[1685]: time="2025-10-27T08:27:50.037822971Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 27 08:27:50.038389 containerd[1685]: time="2025-10-27T08:27:50.037829343Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 27 08:27:50.038389 containerd[1685]: time="2025-10-27T08:27:50.037869725Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 27 08:27:50.038389 containerd[1685]: time="2025-10-27T08:27:50.037878731Z" level=info msg="Start snapshots syncer" Oct 27 08:27:50.038389 containerd[1685]: time="2025-10-27T08:27:50.037901452Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 27 08:27:50.038544 containerd[1685]: time="2025-10-27T08:27:50.038076425Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 27 08:27:50.038544 containerd[1685]: time="2025-10-27T08:27:50.038107584Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 27 08:27:50.038623 containerd[1685]: time="2025-10-27T08:27:50.038153338Z" level=info msg="loading plugin" 
id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 27 08:27:50.038993 containerd[1685]: time="2025-10-27T08:27:50.038753995Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 27 08:27:50.038993 containerd[1685]: time="2025-10-27T08:27:50.038771436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 27 08:27:50.038993 containerd[1685]: time="2025-10-27T08:27:50.038778344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 27 08:27:50.038993 containerd[1685]: time="2025-10-27T08:27:50.038784520Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 27 08:27:50.038993 containerd[1685]: time="2025-10-27T08:27:50.038791051Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 27 08:27:50.038993 containerd[1685]: time="2025-10-27T08:27:50.038796506Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 27 08:27:50.038993 containerd[1685]: time="2025-10-27T08:27:50.038802251Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 27 08:27:50.038993 containerd[1685]: time="2025-10-27T08:27:50.038816281Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 27 08:27:50.038993 containerd[1685]: time="2025-10-27T08:27:50.038822919Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 27 08:27:50.038993 containerd[1685]: time="2025-10-27T08:27:50.038829260Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 27 08:27:50.038993 containerd[1685]: time="2025-10-27T08:27:50.038862943Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 27 08:27:50.038993 containerd[1685]: time="2025-10-27T08:27:50.038874552Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 27 08:27:50.038993 containerd[1685]: time="2025-10-27T08:27:50.038879558Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 27 08:27:50.039169 containerd[1685]: time="2025-10-27T08:27:50.038884765Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 27 08:27:50.039169 containerd[1685]: time="2025-10-27T08:27:50.038888865Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 27 08:27:50.039169 containerd[1685]: time="2025-10-27T08:27:50.038893887Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 27 08:27:50.039169 containerd[1685]: time="2025-10-27T08:27:50.038899344Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 27 08:27:50.039169 containerd[1685]: time="2025-10-27T08:27:50.038909231Z" level=info msg="runtime interface created" Oct 27 08:27:50.039169 containerd[1685]: time="2025-10-27T08:27:50.038939942Z" level=info msg="created NRI interface" Oct 27 08:27:50.039169 containerd[1685]: time="2025-10-27T08:27:50.038946748Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 27 08:27:50.039169 containerd[1685]: time="2025-10-27T08:27:50.038954881Z" level=info msg="Connect containerd service" Oct 27 08:27:50.039169 containerd[1685]: time="2025-10-27T08:27:50.038975275Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 27 08:27:50.039592 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 27 08:27:50.040218 containerd[1685]: time="2025-10-27T08:27:50.040205909Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 27 08:27:50.040533 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 27 08:27:50.040798 systemd[1]: Reached target getty.target - Login Prompts. Oct 27 08:27:50.132712 tar[1663]: linux-amd64/README.md Oct 27 08:27:50.140566 containerd[1685]: time="2025-10-27T08:27:50.140545935Z" level=info msg="Start subscribing containerd event" Oct 27 08:27:50.140663 containerd[1685]: time="2025-10-27T08:27:50.140625396Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 27 08:27:50.140710 containerd[1685]: time="2025-10-27T08:27:50.140656050Z" level=info msg="Start recovering state" Oct 27 08:27:50.140795 containerd[1685]: time="2025-10-27T08:27:50.140693545Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 27 08:27:50.140831 containerd[1685]: time="2025-10-27T08:27:50.140824869Z" level=info msg="Start event monitor" Oct 27 08:27:50.140861 containerd[1685]: time="2025-10-27T08:27:50.140856372Z" level=info msg="Start cni network conf syncer for default" Oct 27 08:27:50.140887 containerd[1685]: time="2025-10-27T08:27:50.140882246Z" level=info msg="Start streaming server" Oct 27 08:27:50.140929 containerd[1685]: time="2025-10-27T08:27:50.140924294Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 27 08:27:50.140980 containerd[1685]: time="2025-10-27T08:27:50.140974666Z" level=info msg="runtime interface starting up..." Oct 27 08:27:50.141006 containerd[1685]: time="2025-10-27T08:27:50.141001197Z" level=info msg="starting plugins..." Oct 27 08:27:50.141057 containerd[1685]: time="2025-10-27T08:27:50.141051439Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 27 08:27:50.141377 containerd[1685]: time="2025-10-27T08:27:50.141364349Z" level=info msg="containerd successfully booted in 0.129666s" Oct 27 08:27:50.141475 systemd[1]: Started containerd.service - containerd container runtime. Oct 27 08:27:50.145765 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 27 08:27:50.512603 systemd-networkd[1580]: ens192: Gained IPv6LL Oct 27 08:27:50.512990 systemd-timesyncd[1544]: Network configuration changed, trying to establish connection. Oct 27 08:27:50.514046 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 27 08:27:50.514832 systemd[1]: Reached target network-online.target - Network is Online. Oct 27 08:27:50.516375 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Oct 27 08:27:50.517890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:27:50.520858 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 27 08:27:50.540185 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
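containerd's CRI plugin warns that no CNI configuration exists in /etc/cni/net.d, which is expected this early: the pod network plugin is normally installed after the node joins a cluster. As a rough sketch of the kind of file that would satisfy the check (a hypothetical bridge network, not what this node will actually use):

mkdir -p /etc/cni/net.d
cat <<'EOF' >/etc/cni/net.d/10-example.conflist
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF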
Oct 27 08:27:50.578797 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 27 08:27:50.579066 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Oct 27 08:27:50.579952 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 27 08:27:51.365882 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:27:51.366942 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 27 08:27:51.367522 systemd[1]: Startup finished in 2.285s (kernel) + 5.703s (initrd) + 4.319s (userspace) = 12.308s. Oct 27 08:27:51.378643 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 08:27:51.402824 login[1787]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 27 08:27:51.409926 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 27 08:27:51.410981 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 27 08:27:51.414258 systemd-logind[1652]: New session 1 of user core. Oct 27 08:27:51.429183 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 27 08:27:51.431679 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 27 08:27:51.438910 (systemd)[1852]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 27 08:27:51.440557 systemd-logind[1652]: New session c1 of user core. Oct 27 08:27:51.532864 systemd[1852]: Queued start job for default target default.target. Oct 27 08:27:51.542488 systemd[1852]: Created slice app.slice - User Application Slice. Oct 27 08:27:51.542662 systemd[1852]: Reached target paths.target - Paths. Oct 27 08:27:51.542733 systemd[1852]: Reached target timers.target - Timers. Oct 27 08:27:51.543566 systemd[1852]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 27 08:27:51.551864 systemd[1852]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 27 08:27:51.551968 systemd[1852]: Reached target sockets.target - Sockets. Oct 27 08:27:51.552047 systemd[1852]: Reached target basic.target - Basic System. Oct 27 08:27:51.552111 systemd[1852]: Reached target default.target - Main User Target. Oct 27 08:27:51.552144 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 27 08:27:51.552200 systemd[1852]: Startup finished in 107ms. Oct 27 08:27:51.553170 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 27 08:27:51.722797 login[1788]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 27 08:27:51.726718 systemd-logind[1652]: New session 2 of user core. Oct 27 08:27:51.732508 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 27 08:27:51.942061 kubelet[1847]: E1027 08:27:51.942023 1847 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 08:27:51.943566 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 08:27:51.943710 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 08:27:51.944113 systemd[1]: kubelet.service: Consumed 649ms CPU time, 265.6M memory peak. 
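The kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; kubeadm writes that file when the node is initialized or joined, so the restart loop seen below (restart counters 1, 2, 3) is expected on a machine that has not yet bootstrapped a cluster. For reference, a minimal hand-written KubeletConfiguration of the kind kubeadm generates (the field choices here are illustrative assumptions):

mkdir -p /var/lib/kubelet
cat <<'EOF' >/var/lib/kubelet/config.yaml
# Minimal illustrative KubeletConfiguration; kubeadm normally generates this file.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
EOF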
Oct 27 08:27:52.216796 systemd-timesyncd[1544]: Network configuration changed, trying to establish connection. Oct 27 08:28:02.193989 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 27 08:28:02.195777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:28:02.539486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:28:02.542386 (kubelet)[1897]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 08:28:02.596215 kubelet[1897]: E1027 08:28:02.596189 1897 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 08:28:02.598976 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 08:28:02.599064 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 08:28:02.599413 systemd[1]: kubelet.service: Consumed 113ms CPU time, 109.2M memory peak. Oct 27 08:28:12.849581 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 27 08:28:12.851042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:28:13.206095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:28:13.209176 (kubelet)[1911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 08:28:13.270038 kubelet[1911]: E1027 08:28:13.269998 1911 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 08:28:13.271448 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 08:28:13.271543 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 08:28:13.271779 systemd[1]: kubelet.service: Consumed 112ms CPU time, 108.5M memory peak. Oct 27 08:28:19.928600 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 27 08:28:19.929572 systemd[1]: Started sshd@0-139.178.70.104:22-147.75.109.163:57068.service - OpenSSH per-connection server daemon (147.75.109.163:57068). Oct 27 08:28:19.999167 sshd[1920]: Accepted publickey for core from 147.75.109.163 port 57068 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:28:19.999993 sshd-session[1920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:28:20.003514 systemd-logind[1652]: New session 3 of user core. Oct 27 08:28:20.010531 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 27 08:28:20.063423 systemd[1]: Started sshd@1-139.178.70.104:22-147.75.109.163:57078.service - OpenSSH per-connection server daemon (147.75.109.163:57078). 
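Each incoming connection gets its own transient unit, e.g. sshd@0-139.178.70.104:22-147.75.109.163:57068.service, because sshd.socket accepts connections and spawns a per-connection sshd instance named after the local and remote endpoints. The live instances can be listed with:

# Show the per-connection sshd units currently running or recently exited.
systemctl list-units 'sshd@*' --all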
Oct 27 08:28:20.107984 sshd[1926]: Accepted publickey for core from 147.75.109.163 port 57078 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:28:20.109215 sshd-session[1926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:28:20.112273 systemd-logind[1652]: New session 4 of user core. Oct 27 08:28:20.116524 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 27 08:28:20.166082 sshd[1929]: Connection closed by 147.75.109.163 port 57078 Oct 27 08:28:20.166587 sshd-session[1926]: pam_unix(sshd:session): session closed for user core Oct 27 08:28:20.177602 systemd[1]: sshd@1-139.178.70.104:22-147.75.109.163:57078.service: Deactivated successfully. Oct 27 08:28:20.179047 systemd[1]: session-4.scope: Deactivated successfully. Oct 27 08:28:20.180282 systemd-logind[1652]: Session 4 logged out. Waiting for processes to exit. Oct 27 08:28:20.182337 systemd[1]: Started sshd@2-139.178.70.104:22-147.75.109.163:57084.service - OpenSSH per-connection server daemon (147.75.109.163:57084). Oct 27 08:28:20.183069 systemd-logind[1652]: Removed session 4. Oct 27 08:28:20.221718 sshd[1935]: Accepted publickey for core from 147.75.109.163 port 57084 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:28:20.222648 sshd-session[1935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:28:20.225938 systemd-logind[1652]: New session 5 of user core. Oct 27 08:28:20.235712 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 27 08:28:20.283285 sshd[1938]: Connection closed by 147.75.109.163 port 57084 Oct 27 08:28:20.284170 sshd-session[1935]: pam_unix(sshd:session): session closed for user core Oct 27 08:28:20.293075 systemd[1]: sshd@2-139.178.70.104:22-147.75.109.163:57084.service: Deactivated successfully. Oct 27 08:28:20.294595 systemd[1]: session-5.scope: Deactivated successfully. Oct 27 08:28:20.295173 systemd-logind[1652]: Session 5 logged out. Waiting for processes to exit. Oct 27 08:28:20.296974 systemd[1]: Started sshd@3-139.178.70.104:22-147.75.109.163:57092.service - OpenSSH per-connection server daemon (147.75.109.163:57092). Oct 27 08:28:20.297845 systemd-logind[1652]: Removed session 5. Oct 27 08:28:20.346677 sshd[1944]: Accepted publickey for core from 147.75.109.163 port 57092 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:28:20.347598 sshd-session[1944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:28:20.350813 systemd-logind[1652]: New session 6 of user core. Oct 27 08:28:20.360551 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 27 08:28:20.409720 sshd[1947]: Connection closed by 147.75.109.163 port 57092 Oct 27 08:28:20.410047 sshd-session[1944]: pam_unix(sshd:session): session closed for user core Oct 27 08:28:20.419722 systemd[1]: sshd@3-139.178.70.104:22-147.75.109.163:57092.service: Deactivated successfully. Oct 27 08:28:20.420851 systemd[1]: session-6.scope: Deactivated successfully. Oct 27 08:28:20.421820 systemd-logind[1652]: Session 6 logged out. Waiting for processes to exit. Oct 27 08:28:20.422985 systemd-logind[1652]: Removed session 6. Oct 27 08:28:20.424151 systemd[1]: Started sshd@4-139.178.70.104:22-147.75.109.163:57094.service - OpenSSH per-connection server daemon (147.75.109.163:57094). 
Oct 27 08:28:20.464469 sshd[1953]: Accepted publickey for core from 147.75.109.163 port 57094 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:28:20.465196 sshd-session[1953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:28:20.467879 systemd-logind[1652]: New session 7 of user core. Oct 27 08:28:20.479700 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 27 08:28:20.569055 sudo[1957]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 27 08:28:20.569280 sudo[1957]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 08:28:20.584287 sudo[1957]: pam_unix(sudo:session): session closed for user root Oct 27 08:28:20.585303 sshd[1956]: Connection closed by 147.75.109.163 port 57094 Oct 27 08:28:20.586279 sshd-session[1953]: pam_unix(sshd:session): session closed for user core Oct 27 08:28:20.596703 systemd[1]: sshd@4-139.178.70.104:22-147.75.109.163:57094.service: Deactivated successfully. Oct 27 08:28:20.597762 systemd[1]: session-7.scope: Deactivated successfully. Oct 27 08:28:20.598296 systemd-logind[1652]: Session 7 logged out. Waiting for processes to exit. Oct 27 08:28:20.599652 systemd[1]: Started sshd@5-139.178.70.104:22-147.75.109.163:57108.service - OpenSSH per-connection server daemon (147.75.109.163:57108). Oct 27 08:28:20.600458 systemd-logind[1652]: Removed session 7. Oct 27 08:28:20.647136 sshd[1963]: Accepted publickey for core from 147.75.109.163 port 57108 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:28:20.648157 sshd-session[1963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:28:20.651044 systemd-logind[1652]: New session 8 of user core. Oct 27 08:28:20.661579 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 27 08:28:20.711576 sudo[1968]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 27 08:28:20.712002 sudo[1968]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 08:28:20.720277 sudo[1968]: pam_unix(sudo:session): session closed for user root Oct 27 08:28:20.725426 sudo[1967]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 27 08:28:20.725651 sudo[1967]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 08:28:20.732785 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 27 08:28:20.762890 augenrules[1990]: No rules Oct 27 08:28:20.763589 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 08:28:20.763833 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 27 08:28:20.764682 sudo[1967]: pam_unix(sudo:session): session closed for user root Oct 27 08:28:20.766464 sshd[1966]: Connection closed by 147.75.109.163 port 57108 Oct 27 08:28:20.766686 sshd-session[1963]: pam_unix(sshd:session): session closed for user core Oct 27 08:28:20.776074 systemd[1]: sshd@5-139.178.70.104:22-147.75.109.163:57108.service: Deactivated successfully. Oct 27 08:28:20.777055 systemd[1]: session-8.scope: Deactivated successfully. Oct 27 08:28:20.777645 systemd-logind[1652]: Session 8 logged out. Waiting for processes to exit. Oct 27 08:28:20.778753 systemd-logind[1652]: Removed session 8. Oct 27 08:28:20.779843 systemd[1]: Started sshd@6-139.178.70.104:22-147.75.109.163:57112.service - OpenSSH per-connection server daemon (147.75.109.163:57112). 
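The sudo commands above remove /etc/audit/rules.d/80-selinux.rules and 99-default.rules and restart audit-rules.service, after which augenrules reports "No rules". Audit rules are normally managed by dropping files into rules.d and letting augenrules merge them; a small illustrative example (the watch rule itself is hypothetical):

# Add an example rule watching sshd_config changes, then rebuild and load the set.
echo '-w /etc/ssh/sshd_config -p wa -k sshd_config' >/etc/audit/rules.d/90-example.rules
augenrules --load
auditctl -l   # list the rules now active in the kernel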
Oct 27 08:28:20.820012 sshd[1999]: Accepted publickey for core from 147.75.109.163 port 57112 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:28:20.821007 sshd-session[1999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:28:20.824958 systemd-logind[1652]: New session 9 of user core. Oct 27 08:28:20.829540 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 27 08:28:20.879115 sudo[2003]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 27 08:28:20.879519 sudo[2003]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 08:28:21.422392 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 27 08:28:21.432679 (dockerd)[2020]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 27 08:28:21.773922 dockerd[2020]: time="2025-10-27T08:28:21.773843767Z" level=info msg="Starting up" Oct 27 08:28:21.775272 dockerd[2020]: time="2025-10-27T08:28:21.775209881Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 27 08:28:21.781674 dockerd[2020]: time="2025-10-27T08:28:21.781635428Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 27 08:28:21.796803 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport809261949-merged.mount: Deactivated successfully. Oct 27 08:28:21.813919 dockerd[2020]: time="2025-10-27T08:28:21.813772469Z" level=info msg="Loading containers: start." Oct 27 08:28:21.823467 kernel: Initializing XFRM netlink socket Oct 27 08:28:22.074384 systemd-timesyncd[1544]: Network configuration changed, trying to establish connection. Oct 27 08:28:22.148542 systemd-networkd[1580]: docker0: Link UP Oct 27 08:30:00.267645 systemd-timesyncd[1544]: Contacted time server 162.159.200.123:123 (2.flatcar.pool.ntp.org). Oct 27 08:30:00.267677 systemd-timesyncd[1544]: Initial clock synchronization to Mon 2025-10-27 08:30:00.267525 UTC. Oct 27 08:30:00.267865 systemd-resolved[1342]: Clock change detected. Flushing caches. Oct 27 08:30:00.291190 dockerd[2020]: time="2025-10-27T08:30:00.291155195Z" level=info msg="Loading containers: done." Oct 27 08:30:00.307254 dockerd[2020]: time="2025-10-27T08:30:00.307214344Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 27 08:30:00.307366 dockerd[2020]: time="2025-10-27T08:30:00.307288190Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 27 08:30:00.307366 dockerd[2020]: time="2025-10-27T08:30:00.307354495Z" level=info msg="Initializing buildkit" Oct 27 08:30:00.321689 dockerd[2020]: time="2025-10-27T08:30:00.321655757Z" level=info msg="Completed buildkit initialization" Oct 27 08:30:00.328088 dockerd[2020]: time="2025-10-27T08:30:00.328053158Z" level=info msg="Daemon has completed initialization" Oct 27 08:30:00.328290 dockerd[2020]: time="2025-10-27T08:30:00.328144202Z" level=info msg="API listen on /run/docker.sock" Oct 27 08:30:00.328469 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 27 08:30:01.639464 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
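Note the jump from 08:28:22 to 08:30:00 in the middle of the docker startup: systemd-timesyncd only reached 2.flatcar.pool.ntp.org at that point, and the initial clock synchronization moved the journal's wall clock forward by roughly 98 seconds, which is also why systemd-resolved flushes its caches. The synchronization state can be checked with:

# Show whether the system clock is synchronized and which NTP server is in use.
timedatectl status
timedatectl timesync-status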
Oct 27 08:30:01.640765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:30:01.701303 containerd[1685]: time="2025-10-27T08:30:01.701232460Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 27 08:30:02.088466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:30:02.093083 (kubelet)[2238]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 08:30:02.115279 kubelet[2238]: E1027 08:30:02.115246 2238 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 08:30:02.116693 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 08:30:02.116833 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 08:30:02.117417 systemd[1]: kubelet.service: Consumed 109ms CPU time, 108.5M memory peak. Oct 27 08:30:02.445351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount475627411.mount: Deactivated successfully. Oct 27 08:30:03.397719 containerd[1685]: time="2025-10-27T08:30:03.397521309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:03.398088 containerd[1685]: time="2025-10-27T08:30:03.398073560Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Oct 27 08:30:03.398187 containerd[1685]: time="2025-10-27T08:30:03.398173248Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:03.399815 containerd[1685]: time="2025-10-27T08:30:03.399798605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:03.400724 containerd[1685]: time="2025-10-27T08:30:03.400485260Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.699202825s" Oct 27 08:30:03.400724 containerd[1685]: time="2025-10-27T08:30:03.400504968Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Oct 27 08:30:03.400838 containerd[1685]: time="2025-10-27T08:30:03.400810102Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Oct 27 08:30:04.890356 containerd[1685]: time="2025-10-27T08:30:04.890318509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:04.897109 containerd[1685]: time="2025-10-27T08:30:04.897093477Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes 
read=26020844" Oct 27 08:30:04.905798 containerd[1685]: time="2025-10-27T08:30:04.905779520Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:04.912977 containerd[1685]: time="2025-10-27T08:30:04.912950315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:04.913532 containerd[1685]: time="2025-10-27T08:30:04.913236970Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.512406033s" Oct 27 08:30:04.913532 containerd[1685]: time="2025-10-27T08:30:04.913254540Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Oct 27 08:30:04.913731 containerd[1685]: time="2025-10-27T08:30:04.913721560Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Oct 27 08:30:06.067922 containerd[1685]: time="2025-10-27T08:30:06.067589291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:06.068455 containerd[1685]: time="2025-10-27T08:30:06.068420995Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Oct 27 08:30:06.068769 containerd[1685]: time="2025-10-27T08:30:06.068758071Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:06.070315 containerd[1685]: time="2025-10-27T08:30:06.070299195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:06.070903 containerd[1685]: time="2025-10-27T08:30:06.070858391Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.15707547s" Oct 27 08:30:06.070903 containerd[1685]: time="2025-10-27T08:30:06.070874874Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Oct 27 08:30:06.071345 containerd[1685]: time="2025-10-27T08:30:06.071270601Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Oct 27 08:30:07.115172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3406565502.mount: Deactivated successfully. 
Oct 27 08:30:07.512886 containerd[1685]: time="2025-10-27T08:30:07.512751888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:07.517536 containerd[1685]: time="2025-10-27T08:30:07.517519069Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Oct 27 08:30:07.525106 containerd[1685]: time="2025-10-27T08:30:07.525082305Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:07.535089 containerd[1685]: time="2025-10-27T08:30:07.535065821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:07.535363 containerd[1685]: time="2025-10-27T08:30:07.535235405Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.463950539s" Oct 27 08:30:07.535363 containerd[1685]: time="2025-10-27T08:30:07.535253567Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Oct 27 08:30:07.535566 containerd[1685]: time="2025-10-27T08:30:07.535485560Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Oct 27 08:30:08.121364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4084519602.mount: Deactivated successfully. 
Oct 27 08:30:09.014927 containerd[1685]: time="2025-10-27T08:30:09.014748976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:09.022813 containerd[1685]: time="2025-10-27T08:30:09.022796995Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Oct 27 08:30:09.033374 containerd[1685]: time="2025-10-27T08:30:09.033356827Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:09.042042 containerd[1685]: time="2025-10-27T08:30:09.042016475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:09.042595 containerd[1685]: time="2025-10-27T08:30:09.042512923Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.507010598s" Oct 27 08:30:09.042595 containerd[1685]: time="2025-10-27T08:30:09.042530351Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Oct 27 08:30:09.043082 containerd[1685]: time="2025-10-27T08:30:09.043062665Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 27 08:30:09.713139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3521980605.mount: Deactivated successfully. 
Oct 27 08:30:09.715871 containerd[1685]: time="2025-10-27T08:30:09.715852769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 08:30:09.716467 containerd[1685]: time="2025-10-27T08:30:09.716452973Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 27 08:30:09.716709 containerd[1685]: time="2025-10-27T08:30:09.716688827Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 08:30:09.718197 containerd[1685]: time="2025-10-27T08:30:09.718184947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 08:30:09.718657 containerd[1685]: time="2025-10-27T08:30:09.718432647Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 675.309833ms" Oct 27 08:30:09.718657 containerd[1685]: time="2025-10-27T08:30:09.718598338Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 27 08:30:09.719051 containerd[1685]: time="2025-10-27T08:30:09.719041529Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Oct 27 08:30:10.160938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3113928970.mount: Deactivated successfully. 
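The containerd entries above report both the payload size and the wall-clock time for each pull, so a rough effective transfer rate can be read straight off the log. The sketch below uses only the sizes and durations printed above; registry latency and decompression are folded into those timings, so the result is a ballpark figure, not a bandwidth measurement. A small Go sketch of the arithmetic:

    // Rough throughput implied by the "Pulled image ... in ..." entries above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        pulls := []struct {
            image string
            bytes float64
            dur   time.Duration
        }{
            {"kube-controller-manager:v1.33.5", 27681301, 1512406033 * time.Nanosecond},
            {"kube-scheduler:v1.33.5", 21816043, 1157075470 * time.Nanosecond},
            {"kube-proxy:v1.33.5", 31928488, 1463950539 * time.Nanosecond},
            {"coredns:v1.12.0", 20939036, 1507010598 * time.Nanosecond},
            {"pause:3.10", 320368, 675309833 * time.Nanosecond},
        }
        for _, p := range pulls {
            // bytes per second, reported in MB/s
            fmt.Printf("%-35s %6.1f MB/s\n", p.image, p.bytes/p.dur.Seconds()/1e6)
        }
    }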
Oct 27 08:30:12.145196 containerd[1685]: time="2025-10-27T08:30:12.144456536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:12.145899 containerd[1685]: time="2025-10-27T08:30:12.145882724Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Oct 27 08:30:12.146260 containerd[1685]: time="2025-10-27T08:30:12.146245129Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:12.148964 containerd[1685]: time="2025-10-27T08:30:12.148947679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:12.149510 containerd[1685]: time="2025-10-27T08:30:12.149417821Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.430194158s" Oct 27 08:30:12.149576 containerd[1685]: time="2025-10-27T08:30:12.149564987Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Oct 27 08:30:12.367230 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 27 08:30:12.369359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:30:12.824516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:30:12.832195 (kubelet)[2456]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 08:30:12.873966 kubelet[2456]: E1027 08:30:12.873942 2456 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 08:30:12.874894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 08:30:12.874987 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 08:30:12.875290 systemd[1]: kubelet.service: Consumed 94ms CPU time, 107.4M memory peak. Oct 27 08:30:13.180098 update_engine[1654]: I20251027 08:30:13.180006 1654 update_attempter.cc:509] Updating boot flags... Oct 27 08:30:15.160324 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:30:15.160736 systemd[1]: kubelet.service: Consumed 94ms CPU time, 107.4M memory peak. Oct 27 08:30:15.162451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:30:15.182452 systemd[1]: Reload requested from client PID 2493 ('systemctl') (unit session-9.scope)... Oct 27 08:30:15.182464 systemd[1]: Reloading... Oct 27 08:30:15.255947 zram_generator::config[2541]: No configuration found. 
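The kubelet failures above are the expected pre-bootstrap state: the unit exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet (that file is normally written by kubeadm during init or join), and the systemd restart policy keeps rescheduling the unit, hence "restart counter is at 4". A minimal Go sketch of that failure mode, assuming only the path shown in the error message; this is illustrative, not the kubelet's actual startup code:

    // Exit non-zero when the kubelet config file is absent, mirroring the
    // "failed to load Kubelet config file ... no such file or directory" error above.
    package main

    import (
        "fmt"
        "os"
    )

    const kubeletConfigPath = "/var/lib/kubelet/config.yaml"

    func main() {
        if _, err := os.Stat(kubeletConfigPath); err != nil {
            fmt.Fprintf(os.Stderr, "failed to load kubelet config file %s: %v\n", kubeletConfigPath, err)
            os.Exit(1) // systemd records status=1/FAILURE and schedules a restart
        }
        fmt.Println("config present; startup would continue")
    }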
Oct 27 08:30:15.326735 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 27 08:30:15.395091 systemd[1]: Reloading finished in 212 ms. Oct 27 08:30:15.430973 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 27 08:30:15.431029 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 27 08:30:15.431204 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:30:15.432334 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:30:15.792063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:30:15.797066 (kubelet)[2605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 27 08:30:15.825608 kubelet[2605]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 27 08:30:15.825608 kubelet[2605]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 27 08:30:15.825608 kubelet[2605]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 27 08:30:15.843808 kubelet[2605]: I1027 08:30:15.843782 2605 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 27 08:30:16.118583 kubelet[2605]: I1027 08:30:16.118351 2605 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 27 08:30:16.118583 kubelet[2605]: I1027 08:30:16.118382 2605 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 27 08:30:16.118673 kubelet[2605]: I1027 08:30:16.118622 2605 server.go:956] "Client rotation is on, will bootstrap in background" Oct 27 08:30:16.176600 kubelet[2605]: E1027 08:30:16.176521 2605 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 27 08:30:16.180123 kubelet[2605]: I1027 08:30:16.179902 2605 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 27 08:30:16.203967 kubelet[2605]: I1027 08:30:16.203951 2605 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 27 08:30:16.220159 kubelet[2605]: I1027 08:30:16.220133 2605 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 27 08:30:16.225324 kubelet[2605]: I1027 08:30:16.225110 2605 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 27 08:30:16.233872 kubelet[2605]: I1027 08:30:16.225165 2605 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 27 08:30:16.245753 kubelet[2605]: I1027 08:30:16.245524 2605 topology_manager.go:138] "Creating topology manager with none policy" Oct 27 08:30:16.245753 kubelet[2605]: I1027 08:30:16.245557 2605 container_manager_linux.go:303] "Creating device plugin manager" Oct 27 08:30:16.246401 kubelet[2605]: I1027 08:30:16.246393 2605 state_mem.go:36] "Initialized new in-memory state store" Oct 27 08:30:16.249193 kubelet[2605]: I1027 08:30:16.249181 2605 kubelet.go:480] "Attempting to sync node with API server" Oct 27 08:30:16.249276 kubelet[2605]: I1027 08:30:16.249267 2605 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 27 08:30:16.250294 kubelet[2605]: I1027 08:30:16.250286 2605 kubelet.go:386] "Adding apiserver pod source" Oct 27 08:30:16.251831 kubelet[2605]: I1027 08:30:16.251777 2605 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 27 08:30:16.274678 kubelet[2605]: E1027 08:30:16.274656 2605 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 27 08:30:16.274891 kubelet[2605]: E1027 08:30:16.274810 2605 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 
27 08:30:16.275098 kubelet[2605]: I1027 08:30:16.275090 2605 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 27 08:30:16.275466 kubelet[2605]: I1027 08:30:16.275454 2605 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 27 08:30:16.279258 kubelet[2605]: W1027 08:30:16.279247 2605 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 27 08:30:16.300366 kubelet[2605]: I1027 08:30:16.300352 2605 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 27 08:30:16.300472 kubelet[2605]: I1027 08:30:16.300466 2605 server.go:1289] "Started kubelet" Oct 27 08:30:16.304746 kubelet[2605]: I1027 08:30:16.304715 2605 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 27 08:30:16.305937 kubelet[2605]: I1027 08:30:16.305548 2605 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 27 08:30:16.305937 kubelet[2605]: I1027 08:30:16.305855 2605 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 27 08:30:16.312682 kubelet[2605]: E1027 08:30:16.309462 2605 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.104:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.104:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18724bdce1526457 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-27 08:30:16.300446807 +0000 UTC m=+0.501392625,LastTimestamp:2025-10-27 08:30:16.300446807 +0000 UTC m=+0.501392625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 27 08:30:16.320680 kubelet[2605]: I1027 08:30:16.320656 2605 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 27 08:30:16.329636 kubelet[2605]: I1027 08:30:16.329173 2605 server.go:317] "Adding debug handlers to kubelet server" Oct 27 08:30:16.343946 kubelet[2605]: I1027 08:30:16.343250 2605 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 27 08:30:16.345243 kubelet[2605]: I1027 08:30:16.345229 2605 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 27 08:30:16.345528 kubelet[2605]: E1027 08:30:16.345516 2605 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:30:16.354992 kubelet[2605]: I1027 08:30:16.354613 2605 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 27 08:30:16.354992 kubelet[2605]: I1027 08:30:16.354666 2605 reconciler.go:26] "Reconciler: start to sync state" Oct 27 08:30:16.355469 kubelet[2605]: E1027 08:30:16.355452 2605 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 27 08:30:16.355568 kubelet[2605]: E1027 08:30:16.355556 2605 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="200ms" Oct 27 08:30:16.355844 kubelet[2605]: I1027 08:30:16.355834 2605 factory.go:223] Registration of the systemd container factory successfully Oct 27 08:30:16.355946 kubelet[2605]: I1027 08:30:16.355937 2605 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 27 08:30:16.360411 kubelet[2605]: E1027 08:30:16.360393 2605 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 27 08:30:16.361456 kubelet[2605]: I1027 08:30:16.361447 2605 factory.go:223] Registration of the containerd container factory successfully Oct 27 08:30:16.377032 kubelet[2605]: I1027 08:30:16.376962 2605 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 27 08:30:16.378333 kubelet[2605]: I1027 08:30:16.378320 2605 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 27 08:30:16.378333 kubelet[2605]: I1027 08:30:16.378329 2605 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 27 08:30:16.378406 kubelet[2605]: I1027 08:30:16.378339 2605 state_mem.go:36] "Initialized new in-memory state store" Oct 27 08:30:16.379572 kubelet[2605]: I1027 08:30:16.379563 2605 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 27 08:30:16.379629 kubelet[2605]: I1027 08:30:16.379615 2605 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 27 08:30:16.379670 kubelet[2605]: I1027 08:30:16.379665 2605 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 27 08:30:16.379786 kubelet[2605]: I1027 08:30:16.379780 2605 kubelet.go:2436] "Starting kubelet main sync loop" Oct 27 08:30:16.380126 kubelet[2605]: E1027 08:30:16.380112 2605 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 27 08:30:16.380843 kubelet[2605]: E1027 08:30:16.380829 2605 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 27 08:30:16.409966 kubelet[2605]: I1027 08:30:16.409945 2605 policy_none.go:49] "None policy: Start" Oct 27 08:30:16.409966 kubelet[2605]: I1027 08:30:16.409965 2605 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 27 08:30:16.409966 kubelet[2605]: I1027 08:30:16.409974 2605 state_mem.go:35] "Initializing new in-memory state store" Oct 27 08:30:16.446402 kubelet[2605]: E1027 08:30:16.446378 2605 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:30:16.450961 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Oct 27 08:30:16.460838 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 27 08:30:16.463182 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 27 08:30:16.472808 kubelet[2605]: E1027 08:30:16.472792 2605 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 27 08:30:16.473930 kubelet[2605]: I1027 08:30:16.473320 2605 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 27 08:30:16.473975 kubelet[2605]: I1027 08:30:16.473937 2605 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 27 08:30:16.474159 kubelet[2605]: I1027 08:30:16.474148 2605 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 27 08:30:16.489140 kubelet[2605]: E1027 08:30:16.489121 2605 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 27 08:30:16.503921 kubelet[2605]: E1027 08:30:16.503733 2605 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 27 08:30:16.507962 systemd[1]: Created slice kubepods-burstable-podcc958c2fde5b2cbdbb5d1b820ec23b13.slice - libcontainer container kubepods-burstable-podcc958c2fde5b2cbdbb5d1b820ec23b13.slice. Oct 27 08:30:16.517571 kubelet[2605]: E1027 08:30:16.517551 2605 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:30:16.556163 kubelet[2605]: E1027 08:30:16.556134 2605 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="400ms" Oct 27 08:30:16.559343 kubelet[2605]: I1027 08:30:16.559226 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:30:16.559343 kubelet[2605]: I1027 08:30:16.559248 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:30:16.559343 kubelet[2605]: I1027 08:30:16.559262 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cc958c2fde5b2cbdbb5d1b820ec23b13-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cc958c2fde5b2cbdbb5d1b820ec23b13\") " pod="kube-system/kube-apiserver-localhost" Oct 27 08:30:16.559343 kubelet[2605]: I1027 08:30:16.559272 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cc958c2fde5b2cbdbb5d1b820ec23b13-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"cc958c2fde5b2cbdbb5d1b820ec23b13\") " pod="kube-system/kube-apiserver-localhost" Oct 27 08:30:16.559343 kubelet[2605]: I1027 08:30:16.559281 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cc958c2fde5b2cbdbb5d1b820ec23b13-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cc958c2fde5b2cbdbb5d1b820ec23b13\") " pod="kube-system/kube-apiserver-localhost" Oct 27 08:30:16.559440 kubelet[2605]: I1027 08:30:16.559289 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:30:16.559440 kubelet[2605]: I1027 08:30:16.559298 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:30:16.559440 kubelet[2605]: I1027 08:30:16.559306 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:30:16.567926 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Oct 27 08:30:16.570928 kubelet[2605]: E1027 08:30:16.570689 2605 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:30:16.575769 kubelet[2605]: I1027 08:30:16.575757 2605 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 08:30:16.576063 kubelet[2605]: E1027 08:30:16.576049 2605 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" Oct 27 08:30:16.581024 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. 
Oct 27 08:30:16.582309 kubelet[2605]: E1027 08:30:16.582295 2605 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:30:16.659718 kubelet[2605]: I1027 08:30:16.659685 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 27 08:30:16.793803 kubelet[2605]: I1027 08:30:16.793772 2605 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 08:30:16.794146 kubelet[2605]: E1027 08:30:16.794128 2605 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" Oct 27 08:30:16.820220 containerd[1685]: time="2025-10-27T08:30:16.820152233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cc958c2fde5b2cbdbb5d1b820ec23b13,Namespace:kube-system,Attempt:0,}" Oct 27 08:30:16.882455 containerd[1685]: time="2025-10-27T08:30:16.882246714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Oct 27 08:30:16.882766 containerd[1685]: time="2025-10-27T08:30:16.882754542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Oct 27 08:30:16.921508 containerd[1685]: time="2025-10-27T08:30:16.921447378Z" level=info msg="connecting to shim ea67f10d297660c16f17aec1d045952db6172c7c21f5675bf192acdab167dd2c" address="unix:///run/containerd/s/ebdd7efe08e12ee67c2e7cee95f3224177c76b93c278bf153bf94cb409e9df79" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:30:16.921810 containerd[1685]: time="2025-10-27T08:30:16.921473375Z" level=info msg="connecting to shim 460808bbc3fda5760c73f409d5b9cdce065b61fccb5178f4ddbd4ab8d05f7e6e" address="unix:///run/containerd/s/b0c78c7425530244811dd3ce9faa285529ddeecb60a66b024016b8f83172a151" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:30:16.925208 containerd[1685]: time="2025-10-27T08:30:16.925187074Z" level=info msg="connecting to shim f9668713cce6a35332ef806344325da03d0b0bf9de558c87fc231e248dcf37c3" address="unix:///run/containerd/s/fd3cdf3b9b87dd72fddbf302e2fd750943c0a91d015b95327bd6fc6e28da2134" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:30:16.957300 kubelet[2605]: E1027 08:30:16.957268 2605 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="800ms" Oct 27 08:30:16.989040 systemd[1]: Started cri-containerd-f9668713cce6a35332ef806344325da03d0b0bf9de558c87fc231e248dcf37c3.scope - libcontainer container f9668713cce6a35332ef806344325da03d0b0bf9de558c87fc231e248dcf37c3. Oct 27 08:30:16.995103 systemd[1]: Started cri-containerd-460808bbc3fda5760c73f409d5b9cdce065b61fccb5178f4ddbd4ab8d05f7e6e.scope - libcontainer container 460808bbc3fda5760c73f409d5b9cdce065b61fccb5178f4ddbd4ab8d05f7e6e. 
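The repeated "connect: connection refused" and "Failed to ensure lease exists, will retry" entries occur because nothing is listening on 139.178.70.104:6443 yet; the kube-apiserver static pod sandbox is only now being created. The logged retry interval doubles on each failure (200ms, 400ms, 800ms above, and 1.6s later in the log). A minimal sketch of that doubling pattern, assuming the intervals as logged rather than the kubelet's actual retry code:

    // Reproduce the doubling retry interval visible in the lease-controller entries.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond
        for attempt := 1; attempt <= 4; attempt++ {
            fmt.Printf("attempt %d: retry in %v\n", attempt, interval)
            interval *= 2 // 200ms, 400ms, 800ms, 1.6s
        }
    }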
Oct 27 08:30:16.996876 systemd[1]: Started cri-containerd-ea67f10d297660c16f17aec1d045952db6172c7c21f5675bf192acdab167dd2c.scope - libcontainer container ea67f10d297660c16f17aec1d045952db6172c7c21f5675bf192acdab167dd2c. Oct 27 08:30:17.035127 containerd[1685]: time="2025-10-27T08:30:17.035101787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"460808bbc3fda5760c73f409d5b9cdce065b61fccb5178f4ddbd4ab8d05f7e6e\"" Oct 27 08:30:17.039781 containerd[1685]: time="2025-10-27T08:30:17.039764107Z" level=info msg="CreateContainer within sandbox \"460808bbc3fda5760c73f409d5b9cdce065b61fccb5178f4ddbd4ab8d05f7e6e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 27 08:30:17.052320 containerd[1685]: time="2025-10-27T08:30:17.052250479Z" level=info msg="Container 814c879f07cca3b5eb8006f7ed496d9af82f4c346cfb87b8fb10d27082c7563c: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:30:17.065145 containerd[1685]: time="2025-10-27T08:30:17.065119382Z" level=info msg="CreateContainer within sandbox \"460808bbc3fda5760c73f409d5b9cdce065b61fccb5178f4ddbd4ab8d05f7e6e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"814c879f07cca3b5eb8006f7ed496d9af82f4c346cfb87b8fb10d27082c7563c\"" Oct 27 08:30:17.065611 containerd[1685]: time="2025-10-27T08:30:17.065598721Z" level=info msg="StartContainer for \"814c879f07cca3b5eb8006f7ed496d9af82f4c346cfb87b8fb10d27082c7563c\"" Oct 27 08:30:17.065825 containerd[1685]: time="2025-10-27T08:30:17.065809448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cc958c2fde5b2cbdbb5d1b820ec23b13,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9668713cce6a35332ef806344325da03d0b0bf9de558c87fc231e248dcf37c3\"" Oct 27 08:30:17.066470 containerd[1685]: time="2025-10-27T08:30:17.066458197Z" level=info msg="connecting to shim 814c879f07cca3b5eb8006f7ed496d9af82f4c346cfb87b8fb10d27082c7563c" address="unix:///run/containerd/s/b0c78c7425530244811dd3ce9faa285529ddeecb60a66b024016b8f83172a151" protocol=ttrpc version=3 Oct 27 08:30:17.068265 containerd[1685]: time="2025-10-27T08:30:17.068247542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea67f10d297660c16f17aec1d045952db6172c7c21f5675bf192acdab167dd2c\"" Oct 27 08:30:17.069627 containerd[1685]: time="2025-10-27T08:30:17.069179826Z" level=info msg="CreateContainer within sandbox \"f9668713cce6a35332ef806344325da03d0b0bf9de558c87fc231e248dcf37c3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 27 08:30:17.072225 containerd[1685]: time="2025-10-27T08:30:17.072200740Z" level=info msg="CreateContainer within sandbox \"ea67f10d297660c16f17aec1d045952db6172c7c21f5675bf192acdab167dd2c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 27 08:30:17.085075 systemd[1]: Started cri-containerd-814c879f07cca3b5eb8006f7ed496d9af82f4c346cfb87b8fb10d27082c7563c.scope - libcontainer container 814c879f07cca3b5eb8006f7ed496d9af82f4c346cfb87b8fb10d27082c7563c. 
Oct 27 08:30:17.098281 containerd[1685]: time="2025-10-27T08:30:17.098253902Z" level=info msg="Container 6edd2a4a2e0dabf9854eb723908c45b05fac86637b20a4d52b0598ce15d03f19: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:30:17.112763 containerd[1685]: time="2025-10-27T08:30:17.112737774Z" level=info msg="Container 1e517ec643cb169084788bb62b2376a69dcc78a5081483c5a6e1c7ec875874bc: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:30:17.116108 containerd[1685]: time="2025-10-27T08:30:17.116093611Z" level=info msg="CreateContainer within sandbox \"f9668713cce6a35332ef806344325da03d0b0bf9de558c87fc231e248dcf37c3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6edd2a4a2e0dabf9854eb723908c45b05fac86637b20a4d52b0598ce15d03f19\"" Oct 27 08:30:17.116806 containerd[1685]: time="2025-10-27T08:30:17.116511799Z" level=info msg="StartContainer for \"6edd2a4a2e0dabf9854eb723908c45b05fac86637b20a4d52b0598ce15d03f19\"" Oct 27 08:30:17.116806 containerd[1685]: time="2025-10-27T08:30:17.116769563Z" level=info msg="CreateContainer within sandbox \"ea67f10d297660c16f17aec1d045952db6172c7c21f5675bf192acdab167dd2c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1e517ec643cb169084788bb62b2376a69dcc78a5081483c5a6e1c7ec875874bc\"" Oct 27 08:30:17.116962 containerd[1685]: time="2025-10-27T08:30:17.116952403Z" level=info msg="StartContainer for \"1e517ec643cb169084788bb62b2376a69dcc78a5081483c5a6e1c7ec875874bc\"" Oct 27 08:30:17.117416 containerd[1685]: time="2025-10-27T08:30:17.117400153Z" level=info msg="connecting to shim 6edd2a4a2e0dabf9854eb723908c45b05fac86637b20a4d52b0598ce15d03f19" address="unix:///run/containerd/s/fd3cdf3b9b87dd72fddbf302e2fd750943c0a91d015b95327bd6fc6e28da2134" protocol=ttrpc version=3 Oct 27 08:30:17.117505 containerd[1685]: time="2025-10-27T08:30:17.117493905Z" level=info msg="connecting to shim 1e517ec643cb169084788bb62b2376a69dcc78a5081483c5a6e1c7ec875874bc" address="unix:///run/containerd/s/ebdd7efe08e12ee67c2e7cee95f3224177c76b93c278bf153bf94cb409e9df79" protocol=ttrpc version=3 Oct 27 08:30:17.132010 containerd[1685]: time="2025-10-27T08:30:17.131974736Z" level=info msg="StartContainer for \"814c879f07cca3b5eb8006f7ed496d9af82f4c346cfb87b8fb10d27082c7563c\" returns successfully" Oct 27 08:30:17.145127 systemd[1]: Started cri-containerd-1e517ec643cb169084788bb62b2376a69dcc78a5081483c5a6e1c7ec875874bc.scope - libcontainer container 1e517ec643cb169084788bb62b2376a69dcc78a5081483c5a6e1c7ec875874bc. Oct 27 08:30:17.147989 systemd[1]: Started cri-containerd-6edd2a4a2e0dabf9854eb723908c45b05fac86637b20a4d52b0598ce15d03f19.scope - libcontainer container 6edd2a4a2e0dabf9854eb723908c45b05fac86637b20a4d52b0598ce15d03f19. 
Oct 27 08:30:17.183723 containerd[1685]: time="2025-10-27T08:30:17.183658594Z" level=info msg="StartContainer for \"6edd2a4a2e0dabf9854eb723908c45b05fac86637b20a4d52b0598ce15d03f19\" returns successfully" Oct 27 08:30:17.185971 kubelet[2605]: E1027 08:30:17.185952 2605 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 27 08:30:17.195646 kubelet[2605]: I1027 08:30:17.195627 2605 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 08:30:17.195826 kubelet[2605]: E1027 08:30:17.195808 2605 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" Oct 27 08:30:17.202072 containerd[1685]: time="2025-10-27T08:30:17.202050328Z" level=info msg="StartContainer for \"1e517ec643cb169084788bb62b2376a69dcc78a5081483c5a6e1c7ec875874bc\" returns successfully" Oct 27 08:30:17.309240 kubelet[2605]: E1027 08:30:17.309215 2605 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 27 08:30:17.390657 kubelet[2605]: E1027 08:30:17.390638 2605 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:30:17.403048 kubelet[2605]: E1027 08:30:17.403027 2605 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:30:17.403304 kubelet[2605]: E1027 08:30:17.403221 2605 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:30:17.404941 kubelet[2605]: E1027 08:30:17.404927 2605 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 27 08:30:17.738198 kubelet[2605]: E1027 08:30:17.738172 2605 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 27 08:30:17.757723 kubelet[2605]: E1027 08:30:17.757704 2605 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="1.6s" Oct 27 08:30:17.996771 kubelet[2605]: I1027 08:30:17.996717 2605 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 
08:30:18.393799 kubelet[2605]: E1027 08:30:18.393781 2605 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:30:18.395047 kubelet[2605]: E1027 08:30:18.395036 2605 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:30:18.973922 kubelet[2605]: I1027 08:30:18.972852 2605 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 27 08:30:18.973922 kubelet[2605]: E1027 08:30:18.972877 2605 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 27 08:30:18.999322 kubelet[2605]: E1027 08:30:18.999302 2605 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:30:19.100115 kubelet[2605]: E1027 08:30:19.100079 2605 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:30:19.201087 kubelet[2605]: E1027 08:30:19.201058 2605 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:30:19.302200 kubelet[2605]: E1027 08:30:19.302118 2605 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:30:19.403126 kubelet[2605]: E1027 08:30:19.403103 2605 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:30:19.503459 kubelet[2605]: E1027 08:30:19.503433 2605 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:30:19.604201 kubelet[2605]: E1027 08:30:19.604130 2605 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:30:19.704260 kubelet[2605]: E1027 08:30:19.704229 2605 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:30:19.805294 kubelet[2605]: E1027 08:30:19.805260 2605 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:30:19.906084 kubelet[2605]: E1027 08:30:19.906060 2605 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:30:20.005071 kubelet[2605]: E1027 08:30:20.005035 2605 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:30:20.007131 kubelet[2605]: E1027 08:30:20.007118 2605 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:30:20.107673 kubelet[2605]: E1027 08:30:20.107646 2605 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:30:20.208570 kubelet[2605]: E1027 08:30:20.208493 2605 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:30:20.269518 kubelet[2605]: I1027 08:30:20.269469 2605 apiserver.go:52] "Watching apiserver" Oct 27 08:30:20.354890 kubelet[2605]: I1027 08:30:20.354719 2605 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 27 08:30:20.354890 kubelet[2605]: I1027 08:30:20.354749 
2605 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 08:30:20.362470 kubelet[2605]: I1027 08:30:20.362447 2605 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 27 08:30:20.365856 kubelet[2605]: I1027 08:30:20.365838 2605 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 08:30:20.936292 systemd[1]: Reload requested from client PID 2884 ('systemctl') (unit session-9.scope)... Oct 27 08:30:20.936473 systemd[1]: Reloading... Oct 27 08:30:20.992977 zram_generator::config[2928]: No configuration found. Oct 27 08:30:21.072221 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 27 08:30:21.148383 systemd[1]: Reloading finished in 211 ms. Oct 27 08:30:21.176735 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:30:21.198640 systemd[1]: kubelet.service: Deactivated successfully. Oct 27 08:30:21.198850 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:30:21.198896 systemd[1]: kubelet.service: Consumed 529ms CPU time, 127.5M memory peak. Oct 27 08:30:21.200689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:30:21.657753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:30:21.663297 (kubelet)[2996]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 27 08:30:21.757955 kubelet[2996]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 27 08:30:21.757955 kubelet[2996]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 27 08:30:21.757955 kubelet[2996]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
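The pods named kube-apiserver-localhost, kube-controller-manager-localhost and kube-scheduler-localhost in the "Creating a mirror pod for static pod" entries above are static pods: the kubelet reads their manifests from /etc/kubernetes/manifests (the "Adding static pod path" entry earlier) and suffixes each pod name with the node name, "localhost" on this host, before publishing the API-side mirror pod. A small sketch of that naming, assuming the node name taken from the log:

    // Derive the static/mirror pod names seen in the log from manifest name + node name.
    package main

    import "fmt"

    func staticPodName(manifest, node string) string {
        return manifest + "-" + node
    }

    func main() {
        for _, m := range []string{"kube-apiserver", "kube-controller-manager", "kube-scheduler"} {
            fmt.Println(staticPodName(m, "localhost")) // e.g. kube-apiserver-localhost
        }
    }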
Oct 27 08:30:21.758837 kubelet[2996]: I1027 08:30:21.758809 2996 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 27 08:30:21.762916 kubelet[2996]: I1027 08:30:21.762776 2996 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 27 08:30:21.762916 kubelet[2996]: I1027 08:30:21.762790 2996 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 27 08:30:21.763335 kubelet[2996]: I1027 08:30:21.763322 2996 server.go:956] "Client rotation is on, will bootstrap in background" Oct 27 08:30:21.765046 kubelet[2996]: I1027 08:30:21.765032 2996 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 27 08:30:21.767170 kubelet[2996]: I1027 08:30:21.767106 2996 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 27 08:30:21.776608 kubelet[2996]: I1027 08:30:21.776597 2996 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 27 08:30:21.779621 kubelet[2996]: I1027 08:30:21.779607 2996 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 27 08:30:21.779717 kubelet[2996]: I1027 08:30:21.779698 2996 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 27 08:30:21.779797 kubelet[2996]: I1027 08:30:21.779713 2996 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 27 08:30:21.779849 kubelet[2996]: I1027 08:30:21.779801 2996 topology_manager.go:138] "Creating topology manager with none policy" Oct 27 08:30:21.779849 kubelet[2996]: I1027 08:30:21.779807 2996 container_manager_linux.go:303] "Creating device plugin manager" Oct 27 08:30:21.779849 kubelet[2996]: I1027 08:30:21.779830 2996 state_mem.go:36] "Initialized new in-memory state store" Oct 27 08:30:21.785205 kubelet[2996]: I1027 
08:30:21.785193 2996 kubelet.go:480] "Attempting to sync node with API server" Oct 27 08:30:21.785205 kubelet[2996]: I1027 08:30:21.785206 2996 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 27 08:30:21.785261 kubelet[2996]: I1027 08:30:21.785219 2996 kubelet.go:386] "Adding apiserver pod source" Oct 27 08:30:21.785261 kubelet[2996]: I1027 08:30:21.785227 2996 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 27 08:30:21.789675 kubelet[2996]: I1027 08:30:21.789660 2996 apiserver.go:52] "Watching apiserver" Oct 27 08:30:21.796951 kubelet[2996]: I1027 08:30:21.796934 2996 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 27 08:30:21.797242 kubelet[2996]: I1027 08:30:21.797226 2996 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 27 08:30:21.799899 kubelet[2996]: I1027 08:30:21.799885 2996 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 27 08:30:21.800317 kubelet[2996]: I1027 08:30:21.800306 2996 server.go:1289] "Started kubelet" Oct 27 08:30:21.800655 kubelet[2996]: I1027 08:30:21.800574 2996 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 27 08:30:21.826148 kubelet[2996]: I1027 08:30:21.826128 2996 server.go:317] "Adding debug handlers to kubelet server" Oct 27 08:30:21.827335 kubelet[2996]: I1027 08:30:21.826657 2996 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 27 08:30:21.827335 kubelet[2996]: I1027 08:30:21.826811 2996 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 27 08:30:21.839920 kubelet[2996]: I1027 08:30:21.839903 2996 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 27 08:30:21.840176 kubelet[2996]: I1027 08:30:21.840040 2996 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 27 08:30:21.843939 kubelet[2996]: I1027 08:30:21.842375 2996 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 27 08:30:21.843939 kubelet[2996]: I1027 08:30:21.842450 2996 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 27 08:30:21.843939 kubelet[2996]: I1027 08:30:21.842585 2996 reconciler.go:26] "Reconciler: start to sync state" Oct 27 08:30:21.844921 kubelet[2996]: I1027 08:30:21.844554 2996 factory.go:223] Registration of the systemd container factory successfully Oct 27 08:30:21.844921 kubelet[2996]: I1027 08:30:21.844613 2996 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 27 08:30:21.847311 kubelet[2996]: E1027 08:30:21.847248 2996 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 27 08:30:21.847630 kubelet[2996]: I1027 08:30:21.847621 2996 factory.go:223] Registration of the containerd container factory successfully Oct 27 08:30:21.850855 kubelet[2996]: I1027 08:30:21.850826 2996 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 27 08:30:21.851448 kubelet[2996]: I1027 08:30:21.851436 2996 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Oct 27 08:30:21.851473 kubelet[2996]: I1027 08:30:21.851460 2996 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 27 08:30:21.851491 kubelet[2996]: I1027 08:30:21.851473 2996 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 27 08:30:21.851491 kubelet[2996]: I1027 08:30:21.851479 2996 kubelet.go:2436] "Starting kubelet main sync loop" Oct 27 08:30:21.851524 kubelet[2996]: E1027 08:30:21.851501 2996 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 27 08:30:21.886719 kubelet[2996]: I1027 08:30:21.886692 2996 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 27 08:30:21.886943 kubelet[2996]: I1027 08:30:21.886804 2996 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 27 08:30:21.886943 kubelet[2996]: I1027 08:30:21.886817 2996 state_mem.go:36] "Initialized new in-memory state store" Oct 27 08:30:21.887084 kubelet[2996]: I1027 08:30:21.886902 2996 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 27 08:30:21.887084 kubelet[2996]: I1027 08:30:21.887029 2996 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 27 08:30:21.887084 kubelet[2996]: I1027 08:30:21.887039 2996 policy_none.go:49] "None policy: Start" Oct 27 08:30:21.887084 kubelet[2996]: I1027 08:30:21.887044 2996 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 27 08:30:21.887084 kubelet[2996]: I1027 08:30:21.887050 2996 state_mem.go:35] "Initializing new in-memory state store" Oct 27 08:30:21.887213 kubelet[2996]: I1027 08:30:21.887207 2996 state_mem.go:75] "Updated machine memory state" Oct 27 08:30:21.889407 kubelet[2996]: E1027 08:30:21.889394 2996 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 27 08:30:21.890702 kubelet[2996]: I1027 08:30:21.890223 2996 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 27 08:30:21.890702 kubelet[2996]: I1027 08:30:21.890231 2996 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 27 08:30:21.890702 kubelet[2996]: I1027 08:30:21.890356 2996 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 27 08:30:21.891268 kubelet[2996]: E1027 08:30:21.891260 2996 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 27 08:30:21.952901 kubelet[2996]: I1027 08:30:21.952849 2996 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 08:30:21.956716 kubelet[2996]: E1027 08:30:21.956702 2996 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 27 08:30:21.971371 kubelet[2996]: I1027 08:30:21.971340 2996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.971320521 podStartE2EDuration="1.971320521s" podCreationTimestamp="2025-10-27 08:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:30:21.965875796 +0000 UTC m=+0.278891686" watchObservedRunningTime="2025-10-27 08:30:21.971320521 +0000 UTC m=+0.284336404" Oct 27 08:30:21.971914 kubelet[2996]: I1027 08:30:21.971529 2996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.971523732 podStartE2EDuration="1.971523732s" podCreationTimestamp="2025-10-27 08:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:30:21.969813942 +0000 UTC m=+0.282829833" watchObservedRunningTime="2025-10-27 08:30:21.971523732 +0000 UTC m=+0.284539615" Oct 27 08:30:21.993033 kubelet[2996]: I1027 08:30:21.993001 2996 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 08:30:21.997442 kubelet[2996]: I1027 08:30:21.997379 2996 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 27 08:30:21.997642 kubelet[2996]: I1027 08:30:21.997502 2996 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 27 08:30:22.042647 kubelet[2996]: I1027 08:30:22.042620 2996 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 27 08:30:22.043902 kubelet[2996]: I1027 08:30:22.043853 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cc958c2fde5b2cbdbb5d1b820ec23b13-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cc958c2fde5b2cbdbb5d1b820ec23b13\") " pod="kube-system/kube-apiserver-localhost" Oct 27 08:30:22.043902 kubelet[2996]: I1027 08:30:22.043946 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:30:22.043902 kubelet[2996]: I1027 08:30:22.043962 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:30:22.044152 kubelet[2996]: I1027 08:30:22.044096 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:30:22.044152 kubelet[2996]: I1027 08:30:22.044126 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:30:22.044287 kubelet[2996]: I1027 08:30:22.044232 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 27 08:30:22.044287 kubelet[2996]: I1027 08:30:22.044249 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cc958c2fde5b2cbdbb5d1b820ec23b13-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cc958c2fde5b2cbdbb5d1b820ec23b13\") " pod="kube-system/kube-apiserver-localhost" Oct 27 08:30:22.044287 kubelet[2996]: I1027 08:30:22.044261 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cc958c2fde5b2cbdbb5d1b820ec23b13-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cc958c2fde5b2cbdbb5d1b820ec23b13\") " pod="kube-system/kube-apiserver-localhost" Oct 27 08:30:22.044287 kubelet[2996]: I1027 08:30:22.044273 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:30:22.582756 kubelet[2996]: I1027 08:30:22.582694 2996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.5826829030000003 podStartE2EDuration="2.582682903s" podCreationTimestamp="2025-10-27 08:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:30:21.974029083 +0000 UTC m=+0.287044973" watchObservedRunningTime="2025-10-27 08:30:22.582682903 +0000 UTC m=+0.895698796" Oct 27 08:30:28.020487 kubelet[2996]: I1027 08:30:28.020376 2996 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 27 08:30:28.020792 kubelet[2996]: I1027 08:30:28.020778 2996 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 27 08:30:28.020827 containerd[1685]: time="2025-10-27T08:30:28.020651270Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 27 08:30:28.173648 systemd[1]: Created slice kubepods-besteffort-pod3708c609_0f20_4879_9253_666863fcbfba.slice - libcontainer container kubepods-besteffort-pod3708c609_0f20_4879_9253_666863fcbfba.slice. 
Oct 27 08:30:28.187478 kubelet[2996]: I1027 08:30:28.187453 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmnpc\" (UniqueName: \"kubernetes.io/projected/3708c609-0f20-4879-9253-666863fcbfba-kube-api-access-gmnpc\") pod \"kube-proxy-zhqrp\" (UID: \"3708c609-0f20-4879-9253-666863fcbfba\") " pod="kube-system/kube-proxy-zhqrp" Oct 27 08:30:28.187573 kubelet[2996]: I1027 08:30:28.187482 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3708c609-0f20-4879-9253-666863fcbfba-kube-proxy\") pod \"kube-proxy-zhqrp\" (UID: \"3708c609-0f20-4879-9253-666863fcbfba\") " pod="kube-system/kube-proxy-zhqrp" Oct 27 08:30:28.187573 kubelet[2996]: I1027 08:30:28.187497 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3708c609-0f20-4879-9253-666863fcbfba-lib-modules\") pod \"kube-proxy-zhqrp\" (UID: \"3708c609-0f20-4879-9253-666863fcbfba\") " pod="kube-system/kube-proxy-zhqrp" Oct 27 08:30:28.187573 kubelet[2996]: I1027 08:30:28.187510 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3708c609-0f20-4879-9253-666863fcbfba-xtables-lock\") pod \"kube-proxy-zhqrp\" (UID: \"3708c609-0f20-4879-9253-666863fcbfba\") " pod="kube-system/kube-proxy-zhqrp" Oct 27 08:30:28.292608 kubelet[2996]: E1027 08:30:28.292537 2996 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 27 08:30:28.292608 kubelet[2996]: E1027 08:30:28.292564 2996 projected.go:194] Error preparing data for projected volume kube-api-access-gmnpc for pod kube-system/kube-proxy-zhqrp: configmap "kube-root-ca.crt" not found Oct 27 08:30:28.292708 kubelet[2996]: E1027 08:30:28.292615 2996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3708c609-0f20-4879-9253-666863fcbfba-kube-api-access-gmnpc podName:3708c609-0f20-4879-9253-666863fcbfba nodeName:}" failed. No retries permitted until 2025-10-27 08:30:28.79259373 +0000 UTC m=+7.105609613 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gmnpc" (UniqueName: "kubernetes.io/projected/3708c609-0f20-4879-9253-666863fcbfba-kube-api-access-gmnpc") pod "kube-proxy-zhqrp" (UID: "3708c609-0f20-4879-9253-666863fcbfba") : configmap "kube-root-ca.crt" not found Oct 27 08:30:29.084294 containerd[1685]: time="2025-10-27T08:30:29.084254380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zhqrp,Uid:3708c609-0f20-4879-9253-666863fcbfba,Namespace:kube-system,Attempt:0,}" Oct 27 08:30:29.098771 containerd[1685]: time="2025-10-27T08:30:29.098716858Z" level=info msg="connecting to shim a64d5ec56c91d0ca8ed00aea4a97f726d78ad1516cee277dc44cc860e1eb0dfb" address="unix:///run/containerd/s/a2019cec0a5f26d796fced4b402ee87ef3b134de46d09fd5076026857aea95ea" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:30:29.123121 systemd[1]: Started cri-containerd-a64d5ec56c91d0ca8ed00aea4a97f726d78ad1516cee277dc44cc860e1eb0dfb.scope - libcontainer container a64d5ec56c91d0ca8ed00aea4a97f726d78ad1516cee277dc44cc860e1eb0dfb. 
Oct 27 08:30:29.138039 containerd[1685]: time="2025-10-27T08:30:29.138014070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zhqrp,Uid:3708c609-0f20-4879-9253-666863fcbfba,Namespace:kube-system,Attempt:0,} returns sandbox id \"a64d5ec56c91d0ca8ed00aea4a97f726d78ad1516cee277dc44cc860e1eb0dfb\"" Oct 27 08:30:29.140448 containerd[1685]: time="2025-10-27T08:30:29.140404433Z" level=info msg="CreateContainer within sandbox \"a64d5ec56c91d0ca8ed00aea4a97f726d78ad1516cee277dc44cc860e1eb0dfb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 27 08:30:29.145947 containerd[1685]: time="2025-10-27T08:30:29.145892849Z" level=info msg="Container 7973afaab292714f066d3335c3ecef297f182f3f6b8d3f32412c2b272d94e6dc: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:30:29.147484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount839959149.mount: Deactivated successfully. Oct 27 08:30:29.149525 containerd[1685]: time="2025-10-27T08:30:29.149507908Z" level=info msg="CreateContainer within sandbox \"a64d5ec56c91d0ca8ed00aea4a97f726d78ad1516cee277dc44cc860e1eb0dfb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7973afaab292714f066d3335c3ecef297f182f3f6b8d3f32412c2b272d94e6dc\"" Oct 27 08:30:29.150073 containerd[1685]: time="2025-10-27T08:30:29.149940738Z" level=info msg="StartContainer for \"7973afaab292714f066d3335c3ecef297f182f3f6b8d3f32412c2b272d94e6dc\"" Oct 27 08:30:29.152664 containerd[1685]: time="2025-10-27T08:30:29.152561207Z" level=info msg="connecting to shim 7973afaab292714f066d3335c3ecef297f182f3f6b8d3f32412c2b272d94e6dc" address="unix:///run/containerd/s/a2019cec0a5f26d796fced4b402ee87ef3b134de46d09fd5076026857aea95ea" protocol=ttrpc version=3 Oct 27 08:30:29.173316 systemd[1]: Started cri-containerd-7973afaab292714f066d3335c3ecef297f182f3f6b8d3f32412c2b272d94e6dc.scope - libcontainer container 7973afaab292714f066d3335c3ecef297f182f3f6b8d3f32412c2b272d94e6dc. Oct 27 08:30:29.188630 systemd[1]: Created slice kubepods-besteffort-pod3d2f3074_5a6b_43f3_92d3_94487c93b900.slice - libcontainer container kubepods-besteffort-pod3d2f3074_5a6b_43f3_92d3_94487c93b900.slice. 
Oct 27 08:30:29.193392 kubelet[2996]: I1027 08:30:29.193367 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhpcq\" (UniqueName: \"kubernetes.io/projected/3d2f3074-5a6b-43f3-92d3-94487c93b900-kube-api-access-vhpcq\") pod \"tigera-operator-7dcd859c48-ztsp9\" (UID: \"3d2f3074-5a6b-43f3-92d3-94487c93b900\") " pod="tigera-operator/tigera-operator-7dcd859c48-ztsp9" Oct 27 08:30:29.193632 kubelet[2996]: I1027 08:30:29.193599 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3d2f3074-5a6b-43f3-92d3-94487c93b900-var-lib-calico\") pod \"tigera-operator-7dcd859c48-ztsp9\" (UID: \"3d2f3074-5a6b-43f3-92d3-94487c93b900\") " pod="tigera-operator/tigera-operator-7dcd859c48-ztsp9" Oct 27 08:30:29.213875 containerd[1685]: time="2025-10-27T08:30:29.213835238Z" level=info msg="StartContainer for \"7973afaab292714f066d3335c3ecef297f182f3f6b8d3f32412c2b272d94e6dc\" returns successfully" Oct 27 08:30:29.491777 containerd[1685]: time="2025-10-27T08:30:29.491745820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-ztsp9,Uid:3d2f3074-5a6b-43f3-92d3-94487c93b900,Namespace:tigera-operator,Attempt:0,}" Oct 27 08:30:29.502224 containerd[1685]: time="2025-10-27T08:30:29.502199825Z" level=info msg="connecting to shim 6fe777dee064af51b9757a4e068a2313dfc42d488f475068d2a0753ea3e66150" address="unix:///run/containerd/s/636cd23da4cbeaa224621d483599a9c0092c1fb2221a1788d7e20d29ebf02855" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:30:29.517000 systemd[1]: Started cri-containerd-6fe777dee064af51b9757a4e068a2313dfc42d488f475068d2a0753ea3e66150.scope - libcontainer container 6fe777dee064af51b9757a4e068a2313dfc42d488f475068d2a0753ea3e66150. Oct 27 08:30:29.553312 containerd[1685]: time="2025-10-27T08:30:29.553285448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-ztsp9,Uid:3d2f3074-5a6b-43f3-92d3-94487c93b900,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6fe777dee064af51b9757a4e068a2313dfc42d488f475068d2a0753ea3e66150\"" Oct 27 08:30:29.554318 containerd[1685]: time="2025-10-27T08:30:29.554303653Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 27 08:30:29.889110 kubelet[2996]: I1027 08:30:29.889046 2996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zhqrp" podStartSLOduration=1.88903234 podStartE2EDuration="1.88903234s" podCreationTimestamp="2025-10-27 08:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:30:29.888636878 +0000 UTC m=+8.201652770" watchObservedRunningTime="2025-10-27 08:30:29.88903234 +0000 UTC m=+8.202048238" Oct 27 08:30:30.869832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1043862431.mount: Deactivated successfully. 
Oct 27 08:30:31.343856 containerd[1685]: time="2025-10-27T08:30:31.343511280Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:31.345777 containerd[1685]: time="2025-10-27T08:30:31.345231323Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:31.346930 containerd[1685]: time="2025-10-27T08:30:31.346516093Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.79219423s" Oct 27 08:30:31.346930 containerd[1685]: time="2025-10-27T08:30:31.346532644Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 27 08:30:31.346930 containerd[1685]: time="2025-10-27T08:30:31.346673931Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:31.349745 containerd[1685]: time="2025-10-27T08:30:31.349730531Z" level=info msg="CreateContainer within sandbox \"6fe777dee064af51b9757a4e068a2313dfc42d488f475068d2a0753ea3e66150\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 27 08:30:31.349830 containerd[1685]: time="2025-10-27T08:30:31.349815625Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 27 08:30:31.361948 containerd[1685]: time="2025-10-27T08:30:31.361931870Z" level=info msg="Container 08fa27df1ef6df4143f4b43793b1b85911eaaaba5aafd2cffe319410c7a11f7b: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:30:31.363752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount847687007.mount: Deactivated successfully. Oct 27 08:30:31.365252 containerd[1685]: time="2025-10-27T08:30:31.365226641Z" level=info msg="CreateContainer within sandbox \"6fe777dee064af51b9757a4e068a2313dfc42d488f475068d2a0753ea3e66150\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"08fa27df1ef6df4143f4b43793b1b85911eaaaba5aafd2cffe319410c7a11f7b\"" Oct 27 08:30:31.366413 containerd[1685]: time="2025-10-27T08:30:31.366398791Z" level=info msg="StartContainer for \"08fa27df1ef6df4143f4b43793b1b85911eaaaba5aafd2cffe319410c7a11f7b\"" Oct 27 08:30:31.367325 containerd[1685]: time="2025-10-27T08:30:31.367309312Z" level=info msg="connecting to shim 08fa27df1ef6df4143f4b43793b1b85911eaaaba5aafd2cffe319410c7a11f7b" address="unix:///run/containerd/s/636cd23da4cbeaa224621d483599a9c0092c1fb2221a1788d7e20d29ebf02855" protocol=ttrpc version=3 Oct 27 08:30:31.384137 systemd[1]: Started cri-containerd-08fa27df1ef6df4143f4b43793b1b85911eaaaba5aafd2cffe319410c7a11f7b.scope - libcontainer container 08fa27df1ef6df4143f4b43793b1b85911eaaaba5aafd2cffe319410c7a11f7b. 
Oct 27 08:30:31.400633 containerd[1685]: time="2025-10-27T08:30:31.400608854Z" level=info msg="StartContainer for \"08fa27df1ef6df4143f4b43793b1b85911eaaaba5aafd2cffe319410c7a11f7b\" returns successfully" Oct 27 08:30:32.888103 kubelet[2996]: I1027 08:30:32.888065 2996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-ztsp9" podStartSLOduration=2.0945522 podStartE2EDuration="3.888050132s" podCreationTimestamp="2025-10-27 08:30:29 +0000 UTC" firstStartedPulling="2025-10-27 08:30:29.553883987 +0000 UTC m=+7.866899871" lastFinishedPulling="2025-10-27 08:30:31.347381922 +0000 UTC m=+9.660397803" observedRunningTime="2025-10-27 08:30:31.895565667 +0000 UTC m=+10.208581559" watchObservedRunningTime="2025-10-27 08:30:32.888050132 +0000 UTC m=+11.201066023" Oct 27 08:30:36.497883 sudo[2003]: pam_unix(sudo:session): session closed for user root Oct 27 08:30:36.499521 sshd[2002]: Connection closed by 147.75.109.163 port 57112 Oct 27 08:30:36.499889 sshd-session[1999]: pam_unix(sshd:session): session closed for user core Oct 27 08:30:36.502511 systemd[1]: sshd@6-139.178.70.104:22-147.75.109.163:57112.service: Deactivated successfully. Oct 27 08:30:36.504980 systemd[1]: session-9.scope: Deactivated successfully. Oct 27 08:30:36.505327 systemd[1]: session-9.scope: Consumed 3.826s CPU time, 152.9M memory peak. Oct 27 08:30:36.507393 systemd-logind[1652]: Session 9 logged out. Waiting for processes to exit. Oct 27 08:30:36.509515 systemd-logind[1652]: Removed session 9. Oct 27 08:30:40.534653 systemd[1]: Created slice kubepods-besteffort-pod8df62827_9c9e_47c8_af37_76b0bba93a41.slice - libcontainer container kubepods-besteffort-pod8df62827_9c9e_47c8_af37_76b0bba93a41.slice. Oct 27 08:30:40.660060 kubelet[2996]: I1027 08:30:40.660033 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8df62827-9c9e-47c8-af37-76b0bba93a41-typha-certs\") pod \"calico-typha-94c5756fc-nmftw\" (UID: \"8df62827-9c9e-47c8-af37-76b0bba93a41\") " pod="calico-system/calico-typha-94c5756fc-nmftw" Oct 27 08:30:40.660060 kubelet[2996]: I1027 08:30:40.660059 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsr92\" (UniqueName: \"kubernetes.io/projected/8df62827-9c9e-47c8-af37-76b0bba93a41-kube-api-access-nsr92\") pod \"calico-typha-94c5756fc-nmftw\" (UID: \"8df62827-9c9e-47c8-af37-76b0bba93a41\") " pod="calico-system/calico-typha-94c5756fc-nmftw" Oct 27 08:30:40.660889 kubelet[2996]: I1027 08:30:40.660074 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8df62827-9c9e-47c8-af37-76b0bba93a41-tigera-ca-bundle\") pod \"calico-typha-94c5756fc-nmftw\" (UID: \"8df62827-9c9e-47c8-af37-76b0bba93a41\") " pod="calico-system/calico-typha-94c5756fc-nmftw" Oct 27 08:30:40.669652 systemd[1]: Created slice kubepods-besteffort-pod2968e2e5_2a82_41d7_a316_211aae1c4118.slice - libcontainer container kubepods-besteffort-pod2968e2e5_2a82_41d7_a316_211aae1c4118.slice. 
Oct 27 08:30:40.844943 containerd[1685]: time="2025-10-27T08:30:40.844842107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-94c5756fc-nmftw,Uid:8df62827-9c9e-47c8-af37-76b0bba93a41,Namespace:calico-system,Attempt:0,}" Oct 27 08:30:40.862894 kubelet[2996]: I1027 08:30:40.861850 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2968e2e5-2a82-41d7-a316-211aae1c4118-lib-modules\") pod \"calico-node-nvs6t\" (UID: \"2968e2e5-2a82-41d7-a316-211aae1c4118\") " pod="calico-system/calico-node-nvs6t" Oct 27 08:30:40.862894 kubelet[2996]: I1027 08:30:40.861877 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2968e2e5-2a82-41d7-a316-211aae1c4118-xtables-lock\") pod \"calico-node-nvs6t\" (UID: \"2968e2e5-2a82-41d7-a316-211aae1c4118\") " pod="calico-system/calico-node-nvs6t" Oct 27 08:30:40.862894 kubelet[2996]: I1027 08:30:40.861894 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2968e2e5-2a82-41d7-a316-211aae1c4118-cni-log-dir\") pod \"calico-node-nvs6t\" (UID: \"2968e2e5-2a82-41d7-a316-211aae1c4118\") " pod="calico-system/calico-node-nvs6t" Oct 27 08:30:40.862894 kubelet[2996]: I1027 08:30:40.861934 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2968e2e5-2a82-41d7-a316-211aae1c4118-node-certs\") pod \"calico-node-nvs6t\" (UID: \"2968e2e5-2a82-41d7-a316-211aae1c4118\") " pod="calico-system/calico-node-nvs6t" Oct 27 08:30:40.862894 kubelet[2996]: I1027 08:30:40.861948 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2968e2e5-2a82-41d7-a316-211aae1c4118-flexvol-driver-host\") pod \"calico-node-nvs6t\" (UID: \"2968e2e5-2a82-41d7-a316-211aae1c4118\") " pod="calico-system/calico-node-nvs6t" Oct 27 08:30:40.866178 kubelet[2996]: I1027 08:30:40.861958 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2968e2e5-2a82-41d7-a316-211aae1c4118-var-lib-calico\") pod \"calico-node-nvs6t\" (UID: \"2968e2e5-2a82-41d7-a316-211aae1c4118\") " pod="calico-system/calico-node-nvs6t" Oct 27 08:30:40.866178 kubelet[2996]: I1027 08:30:40.861966 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2968e2e5-2a82-41d7-a316-211aae1c4118-tigera-ca-bundle\") pod \"calico-node-nvs6t\" (UID: \"2968e2e5-2a82-41d7-a316-211aae1c4118\") " pod="calico-system/calico-node-nvs6t" Oct 27 08:30:40.866178 kubelet[2996]: I1027 08:30:40.861976 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jtj9\" (UniqueName: \"kubernetes.io/projected/2968e2e5-2a82-41d7-a316-211aae1c4118-kube-api-access-7jtj9\") pod \"calico-node-nvs6t\" (UID: \"2968e2e5-2a82-41d7-a316-211aae1c4118\") " pod="calico-system/calico-node-nvs6t" Oct 27 08:30:40.866178 kubelet[2996]: I1027 08:30:40.861996 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/2968e2e5-2a82-41d7-a316-211aae1c4118-var-run-calico\") pod \"calico-node-nvs6t\" (UID: \"2968e2e5-2a82-41d7-a316-211aae1c4118\") " pod="calico-system/calico-node-nvs6t" Oct 27 08:30:40.866178 kubelet[2996]: I1027 08:30:40.862008 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2968e2e5-2a82-41d7-a316-211aae1c4118-cni-bin-dir\") pod \"calico-node-nvs6t\" (UID: \"2968e2e5-2a82-41d7-a316-211aae1c4118\") " pod="calico-system/calico-node-nvs6t" Oct 27 08:30:40.866786 kubelet[2996]: I1027 08:30:40.862016 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2968e2e5-2a82-41d7-a316-211aae1c4118-cni-net-dir\") pod \"calico-node-nvs6t\" (UID: \"2968e2e5-2a82-41d7-a316-211aae1c4118\") " pod="calico-system/calico-node-nvs6t" Oct 27 08:30:40.866786 kubelet[2996]: I1027 08:30:40.862026 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2968e2e5-2a82-41d7-a316-211aae1c4118-policysync\") pod \"calico-node-nvs6t\" (UID: \"2968e2e5-2a82-41d7-a316-211aae1c4118\") " pod="calico-system/calico-node-nvs6t" Oct 27 08:30:40.880780 kubelet[2996]: E1027 08:30:40.880741 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9mcn" podUID="ef789cb0-639f-477f-87e6-d52226d43664" Oct 27 08:30:40.883179 containerd[1685]: time="2025-10-27T08:30:40.883142029Z" level=info msg="connecting to shim 7879e7812763f2a9e2240e48d611942fed738fbebf8c67ae40dcd18a579a4375" address="unix:///run/containerd/s/3fb86e90efcfe7aa4c740027f73239f13c2b24c312248a3e9c0430d4217d0242" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:30:40.909066 systemd[1]: Started cri-containerd-7879e7812763f2a9e2240e48d611942fed738fbebf8c67ae40dcd18a579a4375.scope - libcontainer container 7879e7812763f2a9e2240e48d611942fed738fbebf8c67ae40dcd18a579a4375. Oct 27 08:30:40.984560 containerd[1685]: time="2025-10-27T08:30:40.984537611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-94c5756fc-nmftw,Uid:8df62827-9c9e-47c8-af37-76b0bba93a41,Namespace:calico-system,Attempt:0,} returns sandbox id \"7879e7812763f2a9e2240e48d611942fed738fbebf8c67ae40dcd18a579a4375\"" Oct 27 08:30:40.985711 containerd[1685]: time="2025-10-27T08:30:40.985699906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 27 08:30:40.991141 kubelet[2996]: E1027 08:30:40.991123 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:40.991231 kubelet[2996]: W1027 08:30:40.991222 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:40.995573 kubelet[2996]: E1027 08:30:40.991305 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:40.995573 kubelet[2996]: E1027 08:30:40.991437 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:40.995573 kubelet[2996]: W1027 08:30:40.991442 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:40.995573 kubelet[2996]: E1027 08:30:40.991447 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:40.995573 kubelet[2996]: E1027 08:30:40.991584 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:40.995573 kubelet[2996]: W1027 08:30:40.991589 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:40.995573 kubelet[2996]: E1027 08:30:40.991594 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.063743 kubelet[2996]: E1027 08:30:41.063710 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.063743 kubelet[2996]: W1027 08:30:41.063730 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.063743 kubelet[2996]: E1027 08:30:41.063747 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.063901 kubelet[2996]: I1027 08:30:41.063774 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ef789cb0-639f-477f-87e6-d52226d43664-registration-dir\") pod \"csi-node-driver-l9mcn\" (UID: \"ef789cb0-639f-477f-87e6-d52226d43664\") " pod="calico-system/csi-node-driver-l9mcn" Oct 27 08:30:41.063963 kubelet[2996]: E1027 08:30:41.063945 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.063963 kubelet[2996]: W1027 08:30:41.063954 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.063963 kubelet[2996]: E1027 08:30:41.063961 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:41.064034 kubelet[2996]: I1027 08:30:41.063977 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ef789cb0-639f-477f-87e6-d52226d43664-varrun\") pod \"csi-node-driver-l9mcn\" (UID: \"ef789cb0-639f-477f-87e6-d52226d43664\") " pod="calico-system/csi-node-driver-l9mcn" Oct 27 08:30:41.064103 kubelet[2996]: E1027 08:30:41.064084 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.064103 kubelet[2996]: W1027 08:30:41.064098 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.064155 kubelet[2996]: E1027 08:30:41.064110 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.064155 kubelet[2996]: I1027 08:30:41.064135 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dndt\" (UniqueName: \"kubernetes.io/projected/ef789cb0-639f-477f-87e6-d52226d43664-kube-api-access-9dndt\") pod \"csi-node-driver-l9mcn\" (UID: \"ef789cb0-639f-477f-87e6-d52226d43664\") " pod="calico-system/csi-node-driver-l9mcn" Oct 27 08:30:41.064310 kubelet[2996]: E1027 08:30:41.064291 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.064310 kubelet[2996]: W1027 08:30:41.064305 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.064643 kubelet[2996]: E1027 08:30:41.064315 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.064643 kubelet[2996]: I1027 08:30:41.064337 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef789cb0-639f-477f-87e6-d52226d43664-kubelet-dir\") pod \"csi-node-driver-l9mcn\" (UID: \"ef789cb0-639f-477f-87e6-d52226d43664\") " pod="calico-system/csi-node-driver-l9mcn" Oct 27 08:30:41.064643 kubelet[2996]: E1027 08:30:41.064507 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.064643 kubelet[2996]: W1027 08:30:41.064517 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.064643 kubelet[2996]: E1027 08:30:41.064527 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:41.064809 kubelet[2996]: E1027 08:30:41.064801 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.064850 kubelet[2996]: W1027 08:30:41.064843 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.064895 kubelet[2996]: E1027 08:30:41.064888 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.065140 kubelet[2996]: E1027 08:30:41.065064 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.065140 kubelet[2996]: W1027 08:30:41.065072 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.065140 kubelet[2996]: E1027 08:30:41.065079 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.065271 kubelet[2996]: E1027 08:30:41.065263 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.065317 kubelet[2996]: W1027 08:30:41.065310 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.065427 kubelet[2996]: E1027 08:30:41.065359 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.065500 kubelet[2996]: E1027 08:30:41.065493 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.065606 kubelet[2996]: W1027 08:30:41.065539 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.065606 kubelet[2996]: E1027 08:30:41.065549 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.065698 kubelet[2996]: E1027 08:30:41.065691 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.066175 kubelet[2996]: W1027 08:30:41.065963 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.066175 kubelet[2996]: E1027 08:30:41.065973 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:41.066175 kubelet[2996]: E1027 08:30:41.066147 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.066175 kubelet[2996]: W1027 08:30:41.066163 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.066175 kubelet[2996]: E1027 08:30:41.066172 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.066301 kubelet[2996]: E1027 08:30:41.066286 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.066301 kubelet[2996]: W1027 08:30:41.066292 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.066301 kubelet[2996]: E1027 08:30:41.066299 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.066485 kubelet[2996]: I1027 08:30:41.066392 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ef789cb0-639f-477f-87e6-d52226d43664-socket-dir\") pod \"csi-node-driver-l9mcn\" (UID: \"ef789cb0-639f-477f-87e6-d52226d43664\") " pod="calico-system/csi-node-driver-l9mcn" Oct 27 08:30:41.066485 kubelet[2996]: E1027 08:30:41.066398 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.066485 kubelet[2996]: W1027 08:30:41.066407 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.066485 kubelet[2996]: E1027 08:30:41.066414 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.066700 kubelet[2996]: E1027 08:30:41.066536 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.066700 kubelet[2996]: W1027 08:30:41.066544 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.066700 kubelet[2996]: E1027 08:30:41.066550 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:41.066845 kubelet[2996]: E1027 08:30:41.066837 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.066889 kubelet[2996]: W1027 08:30:41.066883 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.066955 kubelet[2996]: E1027 08:30:41.066941 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.167280 kubelet[2996]: E1027 08:30:41.167253 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.167280 kubelet[2996]: W1027 08:30:41.167270 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.167280 kubelet[2996]: E1027 08:30:41.167283 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.167420 kubelet[2996]: E1027 08:30:41.167378 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.167420 kubelet[2996]: W1027 08:30:41.167382 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.167420 kubelet[2996]: E1027 08:30:41.167387 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.167468 kubelet[2996]: E1027 08:30:41.167463 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.167468 kubelet[2996]: W1027 08:30:41.167467 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.167499 kubelet[2996]: E1027 08:30:41.167472 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.167569 kubelet[2996]: E1027 08:30:41.167556 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.167569 kubelet[2996]: W1027 08:30:41.167564 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.167623 kubelet[2996]: E1027 08:30:41.167571 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:41.167687 kubelet[2996]: E1027 08:30:41.167673 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.167687 kubelet[2996]: W1027 08:30:41.167681 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.167687 kubelet[2996]: E1027 08:30:41.167686 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.167790 kubelet[2996]: E1027 08:30:41.167774 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.167790 kubelet[2996]: W1027 08:30:41.167787 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.167840 kubelet[2996]: E1027 08:30:41.167793 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.167881 kubelet[2996]: E1027 08:30:41.167875 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.167881 kubelet[2996]: W1027 08:30:41.167880 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.167950 kubelet[2996]: E1027 08:30:41.167884 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.167987 kubelet[2996]: E1027 08:30:41.167977 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.167987 kubelet[2996]: W1027 08:30:41.167982 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.167987 kubelet[2996]: E1027 08:30:41.167987 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.168199 kubelet[2996]: E1027 08:30:41.168127 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.168199 kubelet[2996]: W1027 08:30:41.168136 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.168199 kubelet[2996]: E1027 08:30:41.168145 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:41.168289 kubelet[2996]: E1027 08:30:41.168284 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.168325 kubelet[2996]: W1027 08:30:41.168319 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.168361 kubelet[2996]: E1027 08:30:41.168356 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.168558 kubelet[2996]: E1027 08:30:41.168479 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.168558 kubelet[2996]: W1027 08:30:41.168485 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.168558 kubelet[2996]: E1027 08:30:41.168490 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.168661 kubelet[2996]: E1027 08:30:41.168656 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.168694 kubelet[2996]: W1027 08:30:41.168689 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.168726 kubelet[2996]: E1027 08:30:41.168720 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.168856 kubelet[2996]: E1027 08:30:41.168850 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.168953 kubelet[2996]: W1027 08:30:41.168887 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.168953 kubelet[2996]: E1027 08:30:41.168894 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.169028 kubelet[2996]: E1027 08:30:41.169023 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.169061 kubelet[2996]: W1027 08:30:41.169056 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.169101 kubelet[2996]: E1027 08:30:41.169093 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:41.169271 kubelet[2996]: E1027 08:30:41.169213 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.169271 kubelet[2996]: W1027 08:30:41.169219 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.169271 kubelet[2996]: E1027 08:30:41.169225 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.169368 kubelet[2996]: E1027 08:30:41.169363 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.169399 kubelet[2996]: W1027 08:30:41.169395 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.169431 kubelet[2996]: E1027 08:30:41.169427 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.169610 kubelet[2996]: E1027 08:30:41.169556 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.169610 kubelet[2996]: W1027 08:30:41.169562 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.169610 kubelet[2996]: E1027 08:30:41.169566 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.169714 kubelet[2996]: E1027 08:30:41.169708 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.169746 kubelet[2996]: W1027 08:30:41.169741 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.169781 kubelet[2996]: E1027 08:30:41.169776 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.169928 kubelet[2996]: E1027 08:30:41.169916 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.169928 kubelet[2996]: W1027 08:30:41.169923 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.169928 kubelet[2996]: E1027 08:30:41.169928 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:41.170010 kubelet[2996]: E1027 08:30:41.169999 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.170010 kubelet[2996]: W1027 08:30:41.170006 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.170010 kubelet[2996]: E1027 08:30:41.170011 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.170094 kubelet[2996]: E1027 08:30:41.170089 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.170120 kubelet[2996]: W1027 08:30:41.170094 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.170120 kubelet[2996]: E1027 08:30:41.170101 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.170335 kubelet[2996]: E1027 08:30:41.170311 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.170335 kubelet[2996]: W1027 08:30:41.170318 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.170335 kubelet[2996]: E1027 08:30:41.170323 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.170486 kubelet[2996]: E1027 08:30:41.170399 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.170486 kubelet[2996]: W1027 08:30:41.170405 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.170486 kubelet[2996]: E1027 08:30:41.170410 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.170971 kubelet[2996]: E1027 08:30:41.170510 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.170971 kubelet[2996]: W1027 08:30:41.170514 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.170971 kubelet[2996]: E1027 08:30:41.170519 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:41.170971 kubelet[2996]: E1027 08:30:41.170610 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.170971 kubelet[2996]: W1027 08:30:41.170616 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.170971 kubelet[2996]: E1027 08:30:41.170624 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.175665 kubelet[2996]: E1027 08:30:41.175650 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:41.175665 kubelet[2996]: W1027 08:30:41.175660 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:41.175742 kubelet[2996]: E1027 08:30:41.175670 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:41.272417 containerd[1685]: time="2025-10-27T08:30:41.272383427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nvs6t,Uid:2968e2e5-2a82-41d7-a316-211aae1c4118,Namespace:calico-system,Attempt:0,}" Oct 27 08:30:41.309538 containerd[1685]: time="2025-10-27T08:30:41.309326860Z" level=info msg="connecting to shim 16ed1c63a1a2de950215477d3273236c54414d2658fe1f18053646b0737e1e58" address="unix:///run/containerd/s/f31a531be67628fc459d91aaff0ccac732bd383efbbfc95a3e54878e431a49d9" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:30:41.327005 systemd[1]: Started cri-containerd-16ed1c63a1a2de950215477d3273236c54414d2658fe1f18053646b0737e1e58.scope - libcontainer container 16ed1c63a1a2de950215477d3273236c54414d2658fe1f18053646b0737e1e58. Oct 27 08:30:41.350296 containerd[1685]: time="2025-10-27T08:30:41.350271349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nvs6t,Uid:2968e2e5-2a82-41d7-a316-211aae1c4118,Namespace:calico-system,Attempt:0,} returns sandbox id \"16ed1c63a1a2de950215477d3273236c54414d2658fe1f18053646b0737e1e58\"" Oct 27 08:30:42.519819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3322913892.mount: Deactivated successfully. 
Oct 27 08:30:42.853020 kubelet[2996]: E1027 08:30:42.852287 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9mcn" podUID="ef789cb0-639f-477f-87e6-d52226d43664" Oct 27 08:30:44.388925 containerd[1685]: time="2025-10-27T08:30:44.387980861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:44.392608 containerd[1685]: time="2025-10-27T08:30:44.392587780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 27 08:30:44.394560 containerd[1685]: time="2025-10-27T08:30:44.394539733Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:44.398723 containerd[1685]: time="2025-10-27T08:30:44.398704418Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:44.404975 containerd[1685]: time="2025-10-27T08:30:44.399104643Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.413331032s" Oct 27 08:30:44.404975 containerd[1685]: time="2025-10-27T08:30:44.399124637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 27 08:30:44.404975 containerd[1685]: time="2025-10-27T08:30:44.400069971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 27 08:30:44.426300 containerd[1685]: time="2025-10-27T08:30:44.426202816Z" level=info msg="CreateContainer within sandbox \"7879e7812763f2a9e2240e48d611942fed738fbebf8c67ae40dcd18a579a4375\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 27 08:30:44.436383 containerd[1685]: time="2025-10-27T08:30:44.436328352Z" level=info msg="Container 0ffac908cbe891d928934bf753c54c7a166ab9b825a88e8da755449a8131329f: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:30:44.440073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1053611971.mount: Deactivated successfully. 
Oct 27 08:30:44.440485 containerd[1685]: time="2025-10-27T08:30:44.440461822Z" level=info msg="CreateContainer within sandbox \"7879e7812763f2a9e2240e48d611942fed738fbebf8c67ae40dcd18a579a4375\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0ffac908cbe891d928934bf753c54c7a166ab9b825a88e8da755449a8131329f\"" Oct 27 08:30:44.441176 containerd[1685]: time="2025-10-27T08:30:44.441156613Z" level=info msg="StartContainer for \"0ffac908cbe891d928934bf753c54c7a166ab9b825a88e8da755449a8131329f\"" Oct 27 08:30:44.442423 containerd[1685]: time="2025-10-27T08:30:44.442402832Z" level=info msg="connecting to shim 0ffac908cbe891d928934bf753c54c7a166ab9b825a88e8da755449a8131329f" address="unix:///run/containerd/s/3fb86e90efcfe7aa4c740027f73239f13c2b24c312248a3e9c0430d4217d0242" protocol=ttrpc version=3 Oct 27 08:30:44.461098 systemd[1]: Started cri-containerd-0ffac908cbe891d928934bf753c54c7a166ab9b825a88e8da755449a8131329f.scope - libcontainer container 0ffac908cbe891d928934bf753c54c7a166ab9b825a88e8da755449a8131329f. Oct 27 08:30:44.498269 containerd[1685]: time="2025-10-27T08:30:44.498208201Z" level=info msg="StartContainer for \"0ffac908cbe891d928934bf753c54c7a166ab9b825a88e8da755449a8131329f\" returns successfully" Oct 27 08:30:44.859147 kubelet[2996]: E1027 08:30:44.858824 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9mcn" podUID="ef789cb0-639f-477f-87e6-d52226d43664" Oct 27 08:30:44.990803 kubelet[2996]: E1027 08:30:44.990649 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.990803 kubelet[2996]: W1027 08:30:44.990671 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.992259 kubelet[2996]: E1027 08:30:44.992247 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.992458 kubelet[2996]: E1027 08:30:44.992410 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.992458 kubelet[2996]: W1027 08:30:44.992417 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.992458 kubelet[2996]: E1027 08:30:44.992424 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:44.992617 kubelet[2996]: E1027 08:30:44.992579 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.992617 kubelet[2996]: W1027 08:30:44.992585 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.992617 kubelet[2996]: E1027 08:30:44.992591 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.992815 kubelet[2996]: E1027 08:30:44.992756 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.992815 kubelet[2996]: W1027 08:30:44.992761 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.992815 kubelet[2996]: E1027 08:30:44.992766 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.993004 kubelet[2996]: E1027 08:30:44.992975 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.993004 kubelet[2996]: W1027 08:30:44.992981 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.993004 kubelet[2996]: E1027 08:30:44.992987 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.993155 kubelet[2996]: E1027 08:30:44.993128 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.993155 kubelet[2996]: W1027 08:30:44.993134 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.993155 kubelet[2996]: E1027 08:30:44.993138 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.993305 kubelet[2996]: E1027 08:30:44.993276 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.993305 kubelet[2996]: W1027 08:30:44.993281 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.993305 kubelet[2996]: E1027 08:30:44.993287 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:44.993476 kubelet[2996]: E1027 08:30:44.993430 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.993476 kubelet[2996]: W1027 08:30:44.993438 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.993476 kubelet[2996]: E1027 08:30:44.993445 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.993624 kubelet[2996]: E1027 08:30:44.993595 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.993624 kubelet[2996]: W1027 08:30:44.993602 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.993624 kubelet[2996]: E1027 08:30:44.993607 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.994796 kubelet[2996]: E1027 08:30:44.994692 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.994796 kubelet[2996]: W1027 08:30:44.994701 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.994796 kubelet[2996]: E1027 08:30:44.994710 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.994896 kubelet[2996]: E1027 08:30:44.994888 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.994896 kubelet[2996]: W1027 08:30:44.994895 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.995483 kubelet[2996]: E1027 08:30:44.994900 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.995483 kubelet[2996]: E1027 08:30:44.994994 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.995483 kubelet[2996]: W1027 08:30:44.994999 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.995483 kubelet[2996]: E1027 08:30:44.995020 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:44.995483 kubelet[2996]: E1027 08:30:44.995097 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.995483 kubelet[2996]: W1027 08:30:44.995102 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.995483 kubelet[2996]: E1027 08:30:44.995130 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.995483 kubelet[2996]: E1027 08:30:44.995255 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.995483 kubelet[2996]: W1027 08:30:44.995260 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.995483 kubelet[2996]: E1027 08:30:44.995264 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.995674 kubelet[2996]: E1027 08:30:44.995359 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.995674 kubelet[2996]: W1027 08:30:44.995364 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.995674 kubelet[2996]: E1027 08:30:44.995370 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.995674 kubelet[2996]: E1027 08:30:44.995528 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.995674 kubelet[2996]: W1027 08:30:44.995533 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.995674 kubelet[2996]: E1027 08:30:44.995538 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.996407 kubelet[2996]: E1027 08:30:44.996395 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.996407 kubelet[2996]: W1027 08:30:44.996402 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.996407 kubelet[2996]: E1027 08:30:44.996409 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:44.996704 kubelet[2996]: E1027 08:30:44.996682 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.996704 kubelet[2996]: W1027 08:30:44.996688 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.996704 kubelet[2996]: E1027 08:30:44.996694 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.996958 kubelet[2996]: E1027 08:30:44.996825 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.996958 kubelet[2996]: W1027 08:30:44.996830 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.996958 kubelet[2996]: E1027 08:30:44.996835 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.996958 kubelet[2996]: E1027 08:30:44.996942 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.996958 kubelet[2996]: W1027 08:30:44.996947 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.996958 kubelet[2996]: E1027 08:30:44.996953 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.997458 kubelet[2996]: E1027 08:30:44.997035 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.997458 kubelet[2996]: W1027 08:30:44.997039 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.997458 kubelet[2996]: E1027 08:30:44.997043 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.997458 kubelet[2996]: E1027 08:30:44.997158 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.997458 kubelet[2996]: W1027 08:30:44.997162 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.997458 kubelet[2996]: E1027 08:30:44.997168 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:44.997941 kubelet[2996]: E1027 08:30:44.997771 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.997941 kubelet[2996]: W1027 08:30:44.997865 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.997941 kubelet[2996]: E1027 08:30:44.997874 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.998176 kubelet[2996]: E1027 08:30:44.998168 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.998296 kubelet[2996]: W1027 08:30:44.998289 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.998486 kubelet[2996]: E1027 08:30:44.998338 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.998698 kubelet[2996]: E1027 08:30:44.998560 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.998698 kubelet[2996]: W1027 08:30:44.998567 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.998698 kubelet[2996]: E1027 08:30:44.998573 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.998919 kubelet[2996]: E1027 08:30:44.998837 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.998919 kubelet[2996]: W1027 08:30:44.998854 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.998919 kubelet[2996]: E1027 08:30:44.998860 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.999008 kubelet[2996]: E1027 08:30:44.998998 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.999008 kubelet[2996]: W1027 08:30:44.999006 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.999050 kubelet[2996]: E1027 08:30:44.999012 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:44.999131 kubelet[2996]: E1027 08:30:44.999078 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.999131 kubelet[2996]: W1027 08:30:44.999085 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.999131 kubelet[2996]: E1027 08:30:44.999091 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.999210 kubelet[2996]: E1027 08:30:44.999170 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.999210 kubelet[2996]: W1027 08:30:44.999174 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.999210 kubelet[2996]: E1027 08:30:44.999180 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.999288 kubelet[2996]: E1027 08:30:44.999275 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.999288 kubelet[2996]: W1027 08:30:44.999280 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.999288 kubelet[2996]: E1027 08:30:44.999284 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.999551 kubelet[2996]: E1027 08:30:44.999498 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.999551 kubelet[2996]: W1027 08:30:44.999505 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.999551 kubelet[2996]: E1027 08:30:44.999511 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:44.999687 kubelet[2996]: E1027 08:30:44.999680 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.999735 kubelet[2996]: W1027 08:30:44.999729 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.999786 kubelet[2996]: E1027 08:30:44.999767 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:44.999920 kubelet[2996]: E1027 08:30:44.999900 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:44.999972 kubelet[2996]: W1027 08:30:44.999955 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:44.999972 kubelet[2996]: E1027 08:30:44.999962 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:45.930125 kubelet[2996]: I1027 08:30:45.930035 2996 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 27 08:30:46.001497 kubelet[2996]: E1027 08:30:46.001476 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.001497 kubelet[2996]: W1027 08:30:46.001490 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.001497 kubelet[2996]: E1027 08:30:46.001502 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.001800 kubelet[2996]: E1027 08:30:46.001708 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.001800 kubelet[2996]: W1027 08:30:46.001714 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.001800 kubelet[2996]: E1027 08:30:46.001719 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.002369 kubelet[2996]: E1027 08:30:46.001963 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.002369 kubelet[2996]: W1027 08:30:46.001969 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.002369 kubelet[2996]: E1027 08:30:46.001974 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.002369 kubelet[2996]: E1027 08:30:46.002139 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.002369 kubelet[2996]: W1027 08:30:46.002144 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.002369 kubelet[2996]: E1027 08:30:46.002149 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:46.002369 kubelet[2996]: E1027 08:30:46.002362 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.002369 kubelet[2996]: W1027 08:30:46.002367 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.002369 kubelet[2996]: E1027 08:30:46.002372 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.002618 kubelet[2996]: E1027 08:30:46.002483 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.002618 kubelet[2996]: W1027 08:30:46.002487 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.002618 kubelet[2996]: E1027 08:30:46.002492 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.002618 kubelet[2996]: E1027 08:30:46.002582 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.002618 kubelet[2996]: W1027 08:30:46.002585 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.002618 kubelet[2996]: E1027 08:30:46.002590 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.002884 kubelet[2996]: E1027 08:30:46.002663 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.002884 kubelet[2996]: W1027 08:30:46.002667 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.002884 kubelet[2996]: E1027 08:30:46.002671 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.002884 kubelet[2996]: E1027 08:30:46.002834 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.003056 kubelet[2996]: W1027 08:30:46.002922 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.003056 kubelet[2996]: E1027 08:30:46.002932 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:46.003197 kubelet[2996]: E1027 08:30:46.003109 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.003197 kubelet[2996]: W1027 08:30:46.003113 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.003197 kubelet[2996]: E1027 08:30:46.003119 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.003325 kubelet[2996]: E1027 08:30:46.003295 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.003325 kubelet[2996]: W1027 08:30:46.003300 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.003325 kubelet[2996]: E1027 08:30:46.003304 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.003556 kubelet[2996]: E1027 08:30:46.003493 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.003556 kubelet[2996]: W1027 08:30:46.003499 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.003556 kubelet[2996]: E1027 08:30:46.003504 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.003698 kubelet[2996]: E1027 08:30:46.003685 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.003736 kubelet[2996]: W1027 08:30:46.003731 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.003779 kubelet[2996]: E1027 08:30:46.003773 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.003972 kubelet[2996]: E1027 08:30:46.003902 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.003972 kubelet[2996]: W1027 08:30:46.003925 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.003972 kubelet[2996]: E1027 08:30:46.003930 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:46.004079 kubelet[2996]: E1027 08:30:46.004074 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.004127 kubelet[2996]: W1027 08:30:46.004110 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.004127 kubelet[2996]: E1027 08:30:46.004118 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.004370 kubelet[2996]: E1027 08:30:46.004306 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.004370 kubelet[2996]: W1027 08:30:46.004312 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.004370 kubelet[2996]: E1027 08:30:46.004317 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.004531 kubelet[2996]: E1027 08:30:46.004513 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.004570 kubelet[2996]: W1027 08:30:46.004564 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.004642 kubelet[2996]: E1027 08:30:46.004620 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.004747 kubelet[2996]: E1027 08:30:46.004735 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.004747 kubelet[2996]: W1027 08:30:46.004744 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.004804 kubelet[2996]: E1027 08:30:46.004749 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.004833 kubelet[2996]: E1027 08:30:46.004824 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.004833 kubelet[2996]: W1027 08:30:46.004830 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.004833 kubelet[2996]: E1027 08:30:46.004836 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:46.004923 kubelet[2996]: E1027 08:30:46.004900 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.004923 kubelet[2996]: W1027 08:30:46.004921 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.004963 kubelet[2996]: E1027 08:30:46.004928 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.005036 kubelet[2996]: E1027 08:30:46.005026 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.005036 kubelet[2996]: W1027 08:30:46.005034 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.005099 kubelet[2996]: E1027 08:30:46.005039 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.005197 kubelet[2996]: E1027 08:30:46.005189 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.005197 kubelet[2996]: W1027 08:30:46.005195 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.005197 kubelet[2996]: E1027 08:30:46.005200 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.005303 kubelet[2996]: E1027 08:30:46.005294 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.005303 kubelet[2996]: W1027 08:30:46.005301 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.005303 kubelet[2996]: E1027 08:30:46.005306 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.005457 kubelet[2996]: E1027 08:30:46.005451 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.005505 kubelet[2996]: W1027 08:30:46.005490 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.005580 kubelet[2996]: E1027 08:30:46.005538 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:46.005684 kubelet[2996]: E1027 08:30:46.005673 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.005774 kubelet[2996]: W1027 08:30:46.005718 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.005774 kubelet[2996]: E1027 08:30:46.005727 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.005862 kubelet[2996]: E1027 08:30:46.005856 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.005914 kubelet[2996]: W1027 08:30:46.005890 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.005914 kubelet[2996]: E1027 08:30:46.005896 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.006147 kubelet[2996]: E1027 08:30:46.006068 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.006147 kubelet[2996]: W1027 08:30:46.006074 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.006147 kubelet[2996]: E1027 08:30:46.006079 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.006232 kubelet[2996]: E1027 08:30:46.006220 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.006232 kubelet[2996]: W1027 08:30:46.006225 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.006232 kubelet[2996]: E1027 08:30:46.006230 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.006302 kubelet[2996]: E1027 08:30:46.006293 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.006302 kubelet[2996]: W1027 08:30:46.006300 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.006345 kubelet[2996]: E1027 08:30:46.006306 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:46.006380 kubelet[2996]: E1027 08:30:46.006371 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.006380 kubelet[2996]: W1027 08:30:46.006377 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.006457 kubelet[2996]: E1027 08:30:46.006381 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.006457 kubelet[2996]: E1027 08:30:46.006450 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.006457 kubelet[2996]: W1027 08:30:46.006454 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.006513 kubelet[2996]: E1027 08:30:46.006458 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.006702 kubelet[2996]: E1027 08:30:46.006692 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.006702 kubelet[2996]: W1027 08:30:46.006700 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.007031 kubelet[2996]: E1027 08:30:46.006705 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:30:46.007031 kubelet[2996]: E1027 08:30:46.006787 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:30:46.007031 kubelet[2996]: W1027 08:30:46.006791 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:30:46.007031 kubelet[2996]: E1027 08:30:46.006796 2996 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:30:46.236342 containerd[1685]: time="2025-10-27T08:30:46.236051205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:46.236597 containerd[1685]: time="2025-10-27T08:30:46.236576847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 27 08:30:46.237021 containerd[1685]: time="2025-10-27T08:30:46.237005421Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:46.237951 containerd[1685]: time="2025-10-27T08:30:46.237934831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:46.238460 containerd[1685]: time="2025-10-27T08:30:46.238445879Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.838246882s" Oct 27 08:30:46.238510 containerd[1685]: time="2025-10-27T08:30:46.238502028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 27 08:30:46.241888 containerd[1685]: time="2025-10-27T08:30:46.241662055Z" level=info msg="CreateContainer within sandbox \"16ed1c63a1a2de950215477d3273236c54414d2658fe1f18053646b0737e1e58\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 27 08:30:46.246664 containerd[1685]: time="2025-10-27T08:30:46.246644347Z" level=info msg="Container 9aa6cd4cd57f94467def8a1d51d9d662223c7ecfb4622635525e957ae79664b2: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:30:46.260779 containerd[1685]: time="2025-10-27T08:30:46.260756407Z" level=info msg="CreateContainer within sandbox \"16ed1c63a1a2de950215477d3273236c54414d2658fe1f18053646b0737e1e58\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9aa6cd4cd57f94467def8a1d51d9d662223c7ecfb4622635525e957ae79664b2\"" Oct 27 08:30:46.261410 containerd[1685]: time="2025-10-27T08:30:46.261249037Z" level=info msg="StartContainer for \"9aa6cd4cd57f94467def8a1d51d9d662223c7ecfb4622635525e957ae79664b2\"" Oct 27 08:30:46.263146 containerd[1685]: time="2025-10-27T08:30:46.263096612Z" level=info msg="connecting to shim 9aa6cd4cd57f94467def8a1d51d9d662223c7ecfb4622635525e957ae79664b2" address="unix:///run/containerd/s/f31a531be67628fc459d91aaff0ccac732bd383efbbfc95a3e54878e431a49d9" protocol=ttrpc version=3 Oct 27 08:30:46.285077 systemd[1]: Started cri-containerd-9aa6cd4cd57f94467def8a1d51d9d662223c7ecfb4622635525e957ae79664b2.scope - libcontainer container 9aa6cd4cd57f94467def8a1d51d9d662223c7ecfb4622635525e957ae79664b2. 
Oct 27 08:30:46.310175 containerd[1685]: time="2025-10-27T08:30:46.310142108Z" level=info msg="StartContainer for \"9aa6cd4cd57f94467def8a1d51d9d662223c7ecfb4622635525e957ae79664b2\" returns successfully" Oct 27 08:30:46.320532 systemd[1]: cri-containerd-9aa6cd4cd57f94467def8a1d51d9d662223c7ecfb4622635525e957ae79664b2.scope: Deactivated successfully. Oct 27 08:30:46.320735 systemd[1]: cri-containerd-9aa6cd4cd57f94467def8a1d51d9d662223c7ecfb4622635525e957ae79664b2.scope: Consumed 18ms CPU time, 6.2M memory peak, 3.6M written to disk. Oct 27 08:30:46.340587 containerd[1685]: time="2025-10-27T08:30:46.340548756Z" level=info msg="received exit event container_id:\"9aa6cd4cd57f94467def8a1d51d9d662223c7ecfb4622635525e957ae79664b2\" id:\"9aa6cd4cd57f94467def8a1d51d9d662223c7ecfb4622635525e957ae79664b2\" pid:3669 exited_at:{seconds:1761553846 nanos:331321100}" Oct 27 08:30:46.354657 containerd[1685]: time="2025-10-27T08:30:46.354550005Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9aa6cd4cd57f94467def8a1d51d9d662223c7ecfb4622635525e957ae79664b2\" id:\"9aa6cd4cd57f94467def8a1d51d9d662223c7ecfb4622635525e957ae79664b2\" pid:3669 exited_at:{seconds:1761553846 nanos:331321100}" Oct 27 08:30:46.363254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9aa6cd4cd57f94467def8a1d51d9d662223c7ecfb4622635525e957ae79664b2-rootfs.mount: Deactivated successfully. Oct 27 08:30:46.852440 kubelet[2996]: E1027 08:30:46.852165 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9mcn" podUID="ef789cb0-639f-477f-87e6-d52226d43664" Oct 27 08:30:46.931684 containerd[1685]: time="2025-10-27T08:30:46.931638961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 27 08:30:46.940491 kubelet[2996]: I1027 08:30:46.940461 2996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-94c5756fc-nmftw" podStartSLOduration=3.526185143 podStartE2EDuration="6.940449229s" podCreationTimestamp="2025-10-27 08:30:40 +0000 UTC" firstStartedPulling="2025-10-27 08:30:40.98544535 +0000 UTC m=+19.298461229" lastFinishedPulling="2025-10-27 08:30:44.399709429 +0000 UTC m=+22.712725315" observedRunningTime="2025-10-27 08:30:44.965014442 +0000 UTC m=+23.278030334" watchObservedRunningTime="2025-10-27 08:30:46.940449229 +0000 UTC m=+25.253465115" Oct 27 08:30:48.853022 kubelet[2996]: E1027 08:30:48.852522 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9mcn" podUID="ef789cb0-639f-477f-87e6-d52226d43664" Oct 27 08:30:50.817534 containerd[1685]: time="2025-10-27T08:30:50.817500821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:50.818110 containerd[1685]: time="2025-10-27T08:30:50.817927201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 27 08:30:50.818588 containerd[1685]: time="2025-10-27T08:30:50.818567221Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:50.833770 containerd[1685]: time="2025-10-27T08:30:50.833752256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:50.834189 containerd[1685]: time="2025-10-27T08:30:50.834175939Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.902484168s" Oct 27 08:30:50.834236 containerd[1685]: time="2025-10-27T08:30:50.834228935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 27 08:30:50.836774 containerd[1685]: time="2025-10-27T08:30:50.836759201Z" level=info msg="CreateContainer within sandbox \"16ed1c63a1a2de950215477d3273236c54414d2658fe1f18053646b0737e1e58\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 27 08:30:50.841085 containerd[1685]: time="2025-10-27T08:30:50.840985350Z" level=info msg="Container 77a0cb1c5c9780fdd6c85e538573769fce0e7ac03fc8c9de02edaa1a459c5cbc: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:30:50.851695 kubelet[2996]: E1027 08:30:50.851667 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9mcn" podUID="ef789cb0-639f-477f-87e6-d52226d43664" Oct 27 08:30:50.858528 containerd[1685]: time="2025-10-27T08:30:50.858488783Z" level=info msg="CreateContainer within sandbox \"16ed1c63a1a2de950215477d3273236c54414d2658fe1f18053646b0737e1e58\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"77a0cb1c5c9780fdd6c85e538573769fce0e7ac03fc8c9de02edaa1a459c5cbc\"" Oct 27 08:30:50.858806 containerd[1685]: time="2025-10-27T08:30:50.858791812Z" level=info msg="StartContainer for \"77a0cb1c5c9780fdd6c85e538573769fce0e7ac03fc8c9de02edaa1a459c5cbc\"" Oct 27 08:30:50.860131 containerd[1685]: time="2025-10-27T08:30:50.860113036Z" level=info msg="connecting to shim 77a0cb1c5c9780fdd6c85e538573769fce0e7ac03fc8c9de02edaa1a459c5cbc" address="unix:///run/containerd/s/f31a531be67628fc459d91aaff0ccac732bd383efbbfc95a3e54878e431a49d9" protocol=ttrpc version=3 Oct 27 08:30:50.882988 systemd[1]: Started cri-containerd-77a0cb1c5c9780fdd6c85e538573769fce0e7ac03fc8c9de02edaa1a459c5cbc.scope - libcontainer container 77a0cb1c5c9780fdd6c85e538573769fce0e7ac03fc8c9de02edaa1a459c5cbc. Oct 27 08:30:50.907094 containerd[1685]: time="2025-10-27T08:30:50.907069777Z" level=info msg="StartContainer for \"77a0cb1c5c9780fdd6c85e538573769fce0e7ac03fc8c9de02edaa1a459c5cbc\" returns successfully" Oct 27 08:30:52.136387 systemd[1]: cri-containerd-77a0cb1c5c9780fdd6c85e538573769fce0e7ac03fc8c9de02edaa1a459c5cbc.scope: Deactivated successfully. Oct 27 08:30:52.136569 systemd[1]: cri-containerd-77a0cb1c5c9780fdd6c85e538573769fce0e7ac03fc8c9de02edaa1a459c5cbc.scope: Consumed 307ms CPU time, 159.6M memory peak, 20K read from disk, 171.3M written to disk. 
Oct 27 08:30:52.139967 containerd[1685]: time="2025-10-27T08:30:52.139775375Z" level=info msg="received exit event container_id:\"77a0cb1c5c9780fdd6c85e538573769fce0e7ac03fc8c9de02edaa1a459c5cbc\" id:\"77a0cb1c5c9780fdd6c85e538573769fce0e7ac03fc8c9de02edaa1a459c5cbc\" pid:3727 exited_at:{seconds:1761553852 nanos:139148786}" Oct 27 08:30:52.139967 containerd[1685]: time="2025-10-27T08:30:52.139944622Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77a0cb1c5c9780fdd6c85e538573769fce0e7ac03fc8c9de02edaa1a459c5cbc\" id:\"77a0cb1c5c9780fdd6c85e538573769fce0e7ac03fc8c9de02edaa1a459c5cbc\" pid:3727 exited_at:{seconds:1761553852 nanos:139148786}" Oct 27 08:30:52.170528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77a0cb1c5c9780fdd6c85e538573769fce0e7ac03fc8c9de02edaa1a459c5cbc-rootfs.mount: Deactivated successfully. Oct 27 08:30:52.208927 kubelet[2996]: I1027 08:30:52.208894 2996 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 27 08:30:52.309913 systemd[1]: Created slice kubepods-burstable-podd4806ae9_c0ee_46ba_a6b8_04fea562ec1c.slice - libcontainer container kubepods-burstable-podd4806ae9_c0ee_46ba_a6b8_04fea562ec1c.slice. Oct 27 08:30:52.321178 systemd[1]: Created slice kubepods-burstable-pode75ee41c_b373_44a8_9425_dc19e6224499.slice - libcontainer container kubepods-burstable-pode75ee41c_b373_44a8_9425_dc19e6224499.slice. Oct 27 08:30:52.331651 systemd[1]: Created slice kubepods-besteffort-podf209a925_c7ce_4786_8b27_b2615413f1ab.slice - libcontainer container kubepods-besteffort-podf209a925_c7ce_4786_8b27_b2615413f1ab.slice. Oct 27 08:30:52.338847 systemd[1]: Created slice kubepods-besteffort-pod024044b8_ffed_437e_be0d_2af9d6b61984.slice - libcontainer container kubepods-besteffort-pod024044b8_ffed_437e_be0d_2af9d6b61984.slice. 
Oct 27 08:30:52.342929 kubelet[2996]: I1027 08:30:52.342888 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g9tz\" (UniqueName: \"kubernetes.io/projected/423be6d6-16fc-456a-a6d1-876213aa577d-kube-api-access-6g9tz\") pod \"calico-apiserver-74fcb5d84f-hzkgp\" (UID: \"423be6d6-16fc-456a-a6d1-876213aa577d\") " pod="calico-apiserver/calico-apiserver-74fcb5d84f-hzkgp" Oct 27 08:30:52.343183 kubelet[2996]: I1027 08:30:52.343013 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01295943-bd91-493a-b86a-7f82f8f61b26-goldmane-ca-bundle\") pod \"goldmane-666569f655-mxfbf\" (UID: \"01295943-bd91-493a-b86a-7f82f8f61b26\") " pod="calico-system/goldmane-666569f655-mxfbf" Oct 27 08:30:52.343183 kubelet[2996]: I1027 08:30:52.343031 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01295943-bd91-493a-b86a-7f82f8f61b26-config\") pod \"goldmane-666569f655-mxfbf\" (UID: \"01295943-bd91-493a-b86a-7f82f8f61b26\") " pod="calico-system/goldmane-666569f655-mxfbf" Oct 27 08:30:52.343183 kubelet[2996]: I1027 08:30:52.343041 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b62w\" (UniqueName: \"kubernetes.io/projected/01295943-bd91-493a-b86a-7f82f8f61b26-kube-api-access-4b62w\") pod \"goldmane-666569f655-mxfbf\" (UID: \"01295943-bd91-493a-b86a-7f82f8f61b26\") " pod="calico-system/goldmane-666569f655-mxfbf" Oct 27 08:30:52.343183 kubelet[2996]: I1027 08:30:52.343050 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/423be6d6-16fc-456a-a6d1-876213aa577d-calico-apiserver-certs\") pod \"calico-apiserver-74fcb5d84f-hzkgp\" (UID: \"423be6d6-16fc-456a-a6d1-876213aa577d\") " pod="calico-apiserver/calico-apiserver-74fcb5d84f-hzkgp" Oct 27 08:30:52.343183 kubelet[2996]: I1027 08:30:52.343062 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/01295943-bd91-493a-b86a-7f82f8f61b26-goldmane-key-pair\") pod \"goldmane-666569f655-mxfbf\" (UID: \"01295943-bd91-493a-b86a-7f82f8f61b26\") " pod="calico-system/goldmane-666569f655-mxfbf" Oct 27 08:30:52.344921 systemd[1]: Created slice kubepods-besteffort-pod9a4275eb_9684_4e6e_9ea4_6cdccb43bcef.slice - libcontainer container kubepods-besteffort-pod9a4275eb_9684_4e6e_9ea4_6cdccb43bcef.slice. Oct 27 08:30:52.352118 systemd[1]: Created slice kubepods-besteffort-pod423be6d6_16fc_456a_a6d1_876213aa577d.slice - libcontainer container kubepods-besteffort-pod423be6d6_16fc_456a_a6d1_876213aa577d.slice. Oct 27 08:30:52.357616 systemd[1]: Created slice kubepods-besteffort-pod01295943_bd91_493a_b86a_7f82f8f61b26.slice - libcontainer container kubepods-besteffort-pod01295943_bd91_493a_b86a_7f82f8f61b26.slice. 
Oct 27 08:30:52.443965 kubelet[2996]: I1027 08:30:52.443871 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4806ae9-c0ee-46ba-a6b8-04fea562ec1c-config-volume\") pod \"coredns-674b8bbfcf-xmfql\" (UID: \"d4806ae9-c0ee-46ba-a6b8-04fea562ec1c\") " pod="kube-system/coredns-674b8bbfcf-xmfql" Oct 27 08:30:52.443965 kubelet[2996]: I1027 08:30:52.443897 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e75ee41c-b373-44a8-9425-dc19e6224499-config-volume\") pod \"coredns-674b8bbfcf-bnvzn\" (UID: \"e75ee41c-b373-44a8-9425-dc19e6224499\") " pod="kube-system/coredns-674b8bbfcf-bnvzn" Oct 27 08:30:52.445456 kubelet[2996]: I1027 08:30:52.444332 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4v44\" (UniqueName: \"kubernetes.io/projected/e75ee41c-b373-44a8-9425-dc19e6224499-kube-api-access-b4v44\") pod \"coredns-674b8bbfcf-bnvzn\" (UID: \"e75ee41c-b373-44a8-9425-dc19e6224499\") " pod="kube-system/coredns-674b8bbfcf-bnvzn" Oct 27 08:30:52.445456 kubelet[2996]: I1027 08:30:52.444529 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j45rg\" (UniqueName: \"kubernetes.io/projected/f209a925-c7ce-4786-8b27-b2615413f1ab-kube-api-access-j45rg\") pod \"calico-apiserver-74fcb5d84f-x85t2\" (UID: \"f209a925-c7ce-4786-8b27-b2615413f1ab\") " pod="calico-apiserver/calico-apiserver-74fcb5d84f-x85t2" Oct 27 08:30:52.445456 kubelet[2996]: I1027 08:30:52.444563 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/024044b8-ffed-437e-be0d-2af9d6b61984-tigera-ca-bundle\") pod \"calico-kube-controllers-565cf6bf7-fd9ql\" (UID: \"024044b8-ffed-437e-be0d-2af9d6b61984\") " pod="calico-system/calico-kube-controllers-565cf6bf7-fd9ql" Oct 27 08:30:52.445456 kubelet[2996]: I1027 08:30:52.444575 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thxmv\" (UniqueName: \"kubernetes.io/projected/024044b8-ffed-437e-be0d-2af9d6b61984-kube-api-access-thxmv\") pod \"calico-kube-controllers-565cf6bf7-fd9ql\" (UID: \"024044b8-ffed-437e-be0d-2af9d6b61984\") " pod="calico-system/calico-kube-controllers-565cf6bf7-fd9ql" Oct 27 08:30:52.445456 kubelet[2996]: I1027 08:30:52.444585 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7w6z\" (UniqueName: \"kubernetes.io/projected/d4806ae9-c0ee-46ba-a6b8-04fea562ec1c-kube-api-access-k7w6z\") pod \"coredns-674b8bbfcf-xmfql\" (UID: \"d4806ae9-c0ee-46ba-a6b8-04fea562ec1c\") " pod="kube-system/coredns-674b8bbfcf-xmfql" Oct 27 08:30:52.446682 kubelet[2996]: I1027 08:30:52.444595 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a4275eb-9684-4e6e-9ea4-6cdccb43bcef-whisker-backend-key-pair\") pod \"whisker-d58fd57dc-nzzcq\" (UID: \"9a4275eb-9684-4e6e-9ea4-6cdccb43bcef\") " pod="calico-system/whisker-d58fd57dc-nzzcq" Oct 27 08:30:52.446682 kubelet[2996]: I1027 08:30:52.444606 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9a4275eb-9684-4e6e-9ea4-6cdccb43bcef-whisker-ca-bundle\") pod \"whisker-d58fd57dc-nzzcq\" (UID: \"9a4275eb-9684-4e6e-9ea4-6cdccb43bcef\") " pod="calico-system/whisker-d58fd57dc-nzzcq" Oct 27 08:30:52.446682 kubelet[2996]: I1027 08:30:52.444622 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f209a925-c7ce-4786-8b27-b2615413f1ab-calico-apiserver-certs\") pod \"calico-apiserver-74fcb5d84f-x85t2\" (UID: \"f209a925-c7ce-4786-8b27-b2615413f1ab\") " pod="calico-apiserver/calico-apiserver-74fcb5d84f-x85t2" Oct 27 08:30:52.446682 kubelet[2996]: I1027 08:30:52.444631 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69gd8\" (UniqueName: \"kubernetes.io/projected/9a4275eb-9684-4e6e-9ea4-6cdccb43bcef-kube-api-access-69gd8\") pod \"whisker-d58fd57dc-nzzcq\" (UID: \"9a4275eb-9684-4e6e-9ea4-6cdccb43bcef\") " pod="calico-system/whisker-d58fd57dc-nzzcq" Oct 27 08:30:52.618551 containerd[1685]: time="2025-10-27T08:30:52.618520294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xmfql,Uid:d4806ae9-c0ee-46ba-a6b8-04fea562ec1c,Namespace:kube-system,Attempt:0,}" Oct 27 08:30:52.624924 containerd[1685]: time="2025-10-27T08:30:52.624877057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bnvzn,Uid:e75ee41c-b373-44a8-9425-dc19e6224499,Namespace:kube-system,Attempt:0,}" Oct 27 08:30:52.638731 containerd[1685]: time="2025-10-27T08:30:52.638543616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fcb5d84f-x85t2,Uid:f209a925-c7ce-4786-8b27-b2615413f1ab,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:30:52.656281 containerd[1685]: time="2025-10-27T08:30:52.656252369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fcb5d84f-hzkgp,Uid:423be6d6-16fc-456a-a6d1-876213aa577d,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:30:52.656993 containerd[1685]: time="2025-10-27T08:30:52.656902965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-565cf6bf7-fd9ql,Uid:024044b8-ffed-437e-be0d-2af9d6b61984,Namespace:calico-system,Attempt:0,}" Oct 27 08:30:52.657045 containerd[1685]: time="2025-10-27T08:30:52.657031751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d58fd57dc-nzzcq,Uid:9a4275eb-9684-4e6e-9ea4-6cdccb43bcef,Namespace:calico-system,Attempt:0,}" Oct 27 08:30:52.669633 containerd[1685]: time="2025-10-27T08:30:52.669488574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mxfbf,Uid:01295943-bd91-493a-b86a-7f82f8f61b26,Namespace:calico-system,Attempt:0,}" Oct 27 08:30:52.865082 systemd[1]: Created slice kubepods-besteffort-podef789cb0_639f_477f_87e6_d52226d43664.slice - libcontainer container kubepods-besteffort-podef789cb0_639f_477f_87e6_d52226d43664.slice. 
Oct 27 08:30:52.868133 containerd[1685]: time="2025-10-27T08:30:52.867930813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9mcn,Uid:ef789cb0-639f-477f-87e6-d52226d43664,Namespace:calico-system,Attempt:0,}" Oct 27 08:30:52.910693 containerd[1685]: time="2025-10-27T08:30:52.910663405Z" level=error msg="Failed to destroy network for sandbox \"b17bb217b00586e020aac8e6d307498c44555b25ccf8282ee47aa2f8cba01431\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:30:52.911391 containerd[1685]: time="2025-10-27T08:30:52.911367177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xmfql,Uid:d4806ae9-c0ee-46ba-a6b8-04fea562ec1c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b17bb217b00586e020aac8e6d307498c44555b25ccf8282ee47aa2f8cba01431\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:30:52.913267 containerd[1685]: time="2025-10-27T08:30:52.913220728Z" level=error msg="Failed to destroy network for sandbox \"6789e40df7d1b7cd5b4cc7b71e5980d7cd93586d5ae673751ccf258be0495714\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:30:52.914420 kubelet[2996]: E1027 08:30:52.914314 2996 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b17bb217b00586e020aac8e6d307498c44555b25ccf8282ee47aa2f8cba01431\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:30:52.914529 kubelet[2996]: E1027 08:30:52.914516 2996 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b17bb217b00586e020aac8e6d307498c44555b25ccf8282ee47aa2f8cba01431\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xmfql" Oct 27 08:30:52.914584 kubelet[2996]: E1027 08:30:52.914576 2996 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b17bb217b00586e020aac8e6d307498c44555b25ccf8282ee47aa2f8cba01431\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xmfql" Oct 27 08:30:52.915345 kubelet[2996]: E1027 08:30:52.915318 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xmfql_kube-system(d4806ae9-c0ee-46ba-a6b8-04fea562ec1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xmfql_kube-system(d4806ae9-c0ee-46ba-a6b8-04fea562ec1c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b17bb217b00586e020aac8e6d307498c44555b25ccf8282ee47aa2f8cba01431\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xmfql" podUID="d4806ae9-c0ee-46ba-a6b8-04fea562ec1c" Oct 27 08:30:52.915828 containerd[1685]: time="2025-10-27T08:30:52.915696863Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fcb5d84f-hzkgp,Uid:423be6d6-16fc-456a-a6d1-876213aa577d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6789e40df7d1b7cd5b4cc7b71e5980d7cd93586d5ae673751ccf258be0495714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:30:52.920542 kubelet[2996]: E1027 08:30:52.920064 2996 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6789e40df7d1b7cd5b4cc7b71e5980d7cd93586d5ae673751ccf258be0495714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:30:52.920542 kubelet[2996]: E1027 08:30:52.920092 2996 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6789e40df7d1b7cd5b4cc7b71e5980d7cd93586d5ae673751ccf258be0495714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74fcb5d84f-hzkgp" Oct 27 08:30:52.920542 kubelet[2996]: E1027 08:30:52.920110 2996 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6789e40df7d1b7cd5b4cc7b71e5980d7cd93586d5ae673751ccf258be0495714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74fcb5d84f-hzkgp" Oct 27 08:30:52.920674 kubelet[2996]: E1027 08:30:52.920137 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74fcb5d84f-hzkgp_calico-apiserver(423be6d6-16fc-456a-a6d1-876213aa577d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74fcb5d84f-hzkgp_calico-apiserver(423be6d6-16fc-456a-a6d1-876213aa577d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6789e40df7d1b7cd5b4cc7b71e5980d7cd93586d5ae673751ccf258be0495714\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74fcb5d84f-hzkgp" podUID="423be6d6-16fc-456a-a6d1-876213aa577d" Oct 27 08:30:52.923917 containerd[1685]: time="2025-10-27T08:30:52.923593890Z" level=error msg="Failed to destroy network for sandbox \"4c9dd850cd3cbc5d48bb9fcb9c6627cddf5f8f3824e7b11e260c59334b1f9a2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 
08:30:52.924687 containerd[1685]: time="2025-10-27T08:30:52.924666023Z" level=error msg="Failed to destroy network for sandbox \"9b23dac33f2d95e1a55d16447313f3d96b568d12d0d9059927f7fb5b23a62231\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:30:52.926253 containerd[1685]: time="2025-10-27T08:30:52.926201223Z" level=error msg="Failed to destroy network for sandbox \"2bdf1db6ff726ee5832844b6481728708c6f96ca4d7eaf5d5703e8ee16ee89fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:30:52.926374 containerd[1685]: time="2025-10-27T08:30:52.926362438Z" level=error msg="Failed to destroy network for sandbox \"4a643fdfab886a622358ee0c5d55a2dc1d60909c5e986c391bfc630bf0a52542\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:30:52.926774 containerd[1685]: time="2025-10-27T08:30:52.926754095Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mxfbf,Uid:01295943-bd91-493a-b86a-7f82f8f61b26,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c9dd850cd3cbc5d48bb9fcb9c6627cddf5f8f3824e7b11e260c59334b1f9a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:30:52.927131 containerd[1685]: time="2025-10-27T08:30:52.927032821Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fcb5d84f-x85t2,Uid:f209a925-c7ce-4786-8b27-b2615413f1ab,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b23dac33f2d95e1a55d16447313f3d96b568d12d0d9059927f7fb5b23a62231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:30:52.927277 containerd[1685]: time="2025-10-27T08:30:52.927259138Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-565cf6bf7-fd9ql,Uid:024044b8-ffed-437e-be0d-2af9d6b61984,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bdf1db6ff726ee5832844b6481728708c6f96ca4d7eaf5d5703e8ee16ee89fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:30:52.927949 kubelet[2996]: E1027 08:30:52.927334 2996 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c9dd850cd3cbc5d48bb9fcb9c6627cddf5f8f3824e7b11e260c59334b1f9a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:30:52.927949 kubelet[2996]: E1027 08:30:52.927369 2996 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"4c9dd850cd3cbc5d48bb9fcb9c6627cddf5f8f3824e7b11e260c59334b1f9a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mxfbf" Oct 27 08:30:52.927949 kubelet[2996]: E1027 08:30:52.927382 2996 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c9dd850cd3cbc5d48bb9fcb9c6627cddf5f8f3824e7b11e260c59334b1f9a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mxfbf" Oct 27 08:30:52.928044 containerd[1685]: time="2025-10-27T08:30:52.927544838Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bnvzn,Uid:e75ee41c-b373-44a8-9425-dc19e6224499,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a643fdfab886a622358ee0c5d55a2dc1d60909c5e986c391bfc630bf0a52542\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:30:52.928078 kubelet[2996]: E1027 08:30:52.927415 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-mxfbf_calico-system(01295943-bd91-493a-b86a-7f82f8f61b26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-mxfbf_calico-system(01295943-bd91-493a-b86a-7f82f8f61b26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c9dd850cd3cbc5d48bb9fcb9c6627cddf5f8f3824e7b11e260c59334b1f9a2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-mxfbf" podUID="01295943-bd91-493a-b86a-7f82f8f61b26" Oct 27 08:30:52.928540 kubelet[2996]: E1027 08:30:52.928326 2996 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b23dac33f2d95e1a55d16447313f3d96b568d12d0d9059927f7fb5b23a62231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:30:52.928540 kubelet[2996]: E1027 08:30:52.928359 2996 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b23dac33f2d95e1a55d16447313f3d96b568d12d0d9059927f7fb5b23a62231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74fcb5d84f-x85t2" Oct 27 08:30:52.928540 kubelet[2996]: E1027 08:30:52.928371 2996 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b23dac33f2d95e1a55d16447313f3d96b568d12d0d9059927f7fb5b23a62231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74fcb5d84f-x85t2" Oct 27 08:30:52.928612 kubelet[2996]: E1027 08:30:52.928390 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74fcb5d84f-x85t2_calico-apiserver(f209a925-c7ce-4786-8b27-b2615413f1ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74fcb5d84f-x85t2_calico-apiserver(f209a925-c7ce-4786-8b27-b2615413f1ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b23dac33f2d95e1a55d16447313f3d96b568d12d0d9059927f7fb5b23a62231\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74fcb5d84f-x85t2" podUID="f209a925-c7ce-4786-8b27-b2615413f1ab" Oct 27 08:30:52.928612 kubelet[2996]: E1027 08:30:52.928436 2996 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a643fdfab886a622358ee0c5d55a2dc1d60909c5e986c391bfc630bf0a52542\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:30:52.928612 kubelet[2996]: E1027 08:30:52.928449 2996 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a643fdfab886a622358ee0c5d55a2dc1d60909c5e986c391bfc630bf0a52542\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bnvzn" Oct 27 08:30:52.930493 kubelet[2996]: E1027 08:30:52.928456 2996 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a643fdfab886a622358ee0c5d55a2dc1d60909c5e986c391bfc630bf0a52542\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bnvzn" Oct 27 08:30:52.930493 kubelet[2996]: E1027 08:30:52.928472 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bnvzn_kube-system(e75ee41c-b373-44a8-9425-dc19e6224499)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bnvzn_kube-system(e75ee41c-b373-44a8-9425-dc19e6224499)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a643fdfab886a622358ee0c5d55a2dc1d60909c5e986c391bfc630bf0a52542\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bnvzn" podUID="e75ee41c-b373-44a8-9425-dc19e6224499" Oct 27 08:30:52.930493 kubelet[2996]: E1027 08:30:52.928493 2996 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bdf1db6ff726ee5832844b6481728708c6f96ca4d7eaf5d5703e8ee16ee89fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Oct 27 08:30:52.931207 kubelet[2996]: E1027 08:30:52.928503 2996 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bdf1db6ff726ee5832844b6481728708c6f96ca4d7eaf5d5703e8ee16ee89fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-565cf6bf7-fd9ql" Oct 27 08:30:52.931207 kubelet[2996]: E1027 08:30:52.928510 2996 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bdf1db6ff726ee5832844b6481728708c6f96ca4d7eaf5d5703e8ee16ee89fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-565cf6bf7-fd9ql" Oct 27 08:30:52.931207 kubelet[2996]: E1027 08:30:52.928524 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-565cf6bf7-fd9ql_calico-system(024044b8-ffed-437e-be0d-2af9d6b61984)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-565cf6bf7-fd9ql_calico-system(024044b8-ffed-437e-be0d-2af9d6b61984)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2bdf1db6ff726ee5832844b6481728708c6f96ca4d7eaf5d5703e8ee16ee89fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-565cf6bf7-fd9ql" podUID="024044b8-ffed-437e-be0d-2af9d6b61984" Oct 27 08:30:52.932131 containerd[1685]: time="2025-10-27T08:30:52.931277132Z" level=error msg="Failed to destroy network for sandbox \"168ac81e0a7ee020bc22ed6cc1aba76edbfb315483202bd2a6b439f77cced916\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:30:52.932131 containerd[1685]: time="2025-10-27T08:30:52.931737082Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d58fd57dc-nzzcq,Uid:9a4275eb-9684-4e6e-9ea4-6cdccb43bcef,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"168ac81e0a7ee020bc22ed6cc1aba76edbfb315483202bd2a6b439f77cced916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:30:52.932199 kubelet[2996]: E1027 08:30:52.931822 2996 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"168ac81e0a7ee020bc22ed6cc1aba76edbfb315483202bd2a6b439f77cced916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:30:52.932199 kubelet[2996]: E1027 08:30:52.931848 2996 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"168ac81e0a7ee020bc22ed6cc1aba76edbfb315483202bd2a6b439f77cced916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d58fd57dc-nzzcq" Oct 27 08:30:52.932199 kubelet[2996]: E1027 08:30:52.931872 2996 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"168ac81e0a7ee020bc22ed6cc1aba76edbfb315483202bd2a6b439f77cced916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d58fd57dc-nzzcq" Oct 27 08:30:52.932276 kubelet[2996]: E1027 08:30:52.931899 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-d58fd57dc-nzzcq_calico-system(9a4275eb-9684-4e6e-9ea4-6cdccb43bcef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-d58fd57dc-nzzcq_calico-system(9a4275eb-9684-4e6e-9ea4-6cdccb43bcef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"168ac81e0a7ee020bc22ed6cc1aba76edbfb315483202bd2a6b439f77cced916\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-d58fd57dc-nzzcq" podUID="9a4275eb-9684-4e6e-9ea4-6cdccb43bcef" Oct 27 08:30:52.954761 containerd[1685]: time="2025-10-27T08:30:52.954736708Z" level=error msg="Failed to destroy network for sandbox \"8949201fe67a922fe66ffae223e0deb6a5094c4ff2daceb71bde817e8dcf77e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:30:52.955417 containerd[1685]: time="2025-10-27T08:30:52.955398654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9mcn,Uid:ef789cb0-639f-477f-87e6-d52226d43664,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8949201fe67a922fe66ffae223e0deb6a5094c4ff2daceb71bde817e8dcf77e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:30:52.955630 kubelet[2996]: E1027 08:30:52.955608 2996 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8949201fe67a922fe66ffae223e0deb6a5094c4ff2daceb71bde817e8dcf77e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:30:52.955723 kubelet[2996]: E1027 08:30:52.955713 2996 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8949201fe67a922fe66ffae223e0deb6a5094c4ff2daceb71bde817e8dcf77e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9mcn" Oct 27 08:30:52.955776 kubelet[2996]: E1027 08:30:52.955768 2996 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8949201fe67a922fe66ffae223e0deb6a5094c4ff2daceb71bde817e8dcf77e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9mcn" Oct 27 08:30:52.955857 kubelet[2996]: E1027 08:30:52.955842 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l9mcn_calico-system(ef789cb0-639f-477f-87e6-d52226d43664)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l9mcn_calico-system(ef789cb0-639f-477f-87e6-d52226d43664)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8949201fe67a922fe66ffae223e0deb6a5094c4ff2daceb71bde817e8dcf77e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l9mcn" podUID="ef789cb0-639f-477f-87e6-d52226d43664" Oct 27 08:30:53.020578 containerd[1685]: time="2025-10-27T08:30:53.020465860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 27 08:30:59.412490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount124667406.mount: Deactivated successfully. Oct 27 08:30:59.576389 containerd[1685]: time="2025-10-27T08:30:59.576358092Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.553586815s" Oct 27 08:30:59.576659 containerd[1685]: time="2025-10-27T08:30:59.576647183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 27 08:30:59.576739 containerd[1685]: time="2025-10-27T08:30:59.556615045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 27 08:30:59.581734 containerd[1685]: time="2025-10-27T08:30:59.581228091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:59.601476 containerd[1685]: time="2025-10-27T08:30:59.601149991Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:59.601476 containerd[1685]: time="2025-10-27T08:30:59.601438868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:30:59.630523 containerd[1685]: time="2025-10-27T08:30:59.630186529Z" level=info msg="CreateContainer within sandbox \"16ed1c63a1a2de950215477d3273236c54414d2658fe1f18053646b0737e1e58\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 27 08:30:59.697960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2535630584.mount: Deactivated successfully. 
Oct 27 08:30:59.698375 containerd[1685]: time="2025-10-27T08:30:59.698295828Z" level=info msg="Container 10b23570a41acdbf3a7699fed3775bf0e43909572be3bb7b85d4c7f02d7b8544: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:30:59.725455 containerd[1685]: time="2025-10-27T08:30:59.725429083Z" level=info msg="CreateContainer within sandbox \"16ed1c63a1a2de950215477d3273236c54414d2658fe1f18053646b0737e1e58\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"10b23570a41acdbf3a7699fed3775bf0e43909572be3bb7b85d4c7f02d7b8544\"" Oct 27 08:30:59.726033 containerd[1685]: time="2025-10-27T08:30:59.726018296Z" level=info msg="StartContainer for \"10b23570a41acdbf3a7699fed3775bf0e43909572be3bb7b85d4c7f02d7b8544\"" Oct 27 08:30:59.732588 containerd[1685]: time="2025-10-27T08:30:59.732563884Z" level=info msg="connecting to shim 10b23570a41acdbf3a7699fed3775bf0e43909572be3bb7b85d4c7f02d7b8544" address="unix:///run/containerd/s/f31a531be67628fc459d91aaff0ccac732bd383efbbfc95a3e54878e431a49d9" protocol=ttrpc version=3 Oct 27 08:30:59.822561 systemd[1]: Started cri-containerd-10b23570a41acdbf3a7699fed3775bf0e43909572be3bb7b85d4c7f02d7b8544.scope - libcontainer container 10b23570a41acdbf3a7699fed3775bf0e43909572be3bb7b85d4c7f02d7b8544. Oct 27 08:30:59.854511 containerd[1685]: time="2025-10-27T08:30:59.853991461Z" level=info msg="StartContainer for \"10b23570a41acdbf3a7699fed3775bf0e43909572be3bb7b85d4c7f02d7b8544\" returns successfully" Oct 27 08:31:00.186843 kubelet[2996]: I1027 08:31:00.185629 2996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nvs6t" podStartSLOduration=1.958207433 podStartE2EDuration="20.18508702s" podCreationTimestamp="2025-10-27 08:30:40 +0000 UTC" firstStartedPulling="2025-10-27 08:30:41.351081963 +0000 UTC m=+19.664097841" lastFinishedPulling="2025-10-27 08:30:59.577961543 +0000 UTC m=+37.890977428" observedRunningTime="2025-10-27 08:31:00.183884309 +0000 UTC m=+38.496900201" watchObservedRunningTime="2025-10-27 08:31:00.18508702 +0000 UTC m=+38.498102906" Oct 27 08:31:00.364568 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 27 08:31:00.370181 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 27 08:31:00.378127 containerd[1685]: time="2025-10-27T08:31:00.378067139Z" level=info msg="TaskExit event in podsandbox handler container_id:\"10b23570a41acdbf3a7699fed3775bf0e43909572be3bb7b85d4c7f02d7b8544\" id:\"51e3932fab2bcc9fa30767167c3117796a8b10ef337c5b9ef27bb8e8ed680382\" pid:4034 exit_status:1 exited_at:{seconds:1761553860 nanos:377717161}" Oct 27 08:31:00.706930 kubelet[2996]: I1027 08:31:00.706881 2996 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69gd8\" (UniqueName: \"kubernetes.io/projected/9a4275eb-9684-4e6e-9ea4-6cdccb43bcef-kube-api-access-69gd8\") pod \"9a4275eb-9684-4e6e-9ea4-6cdccb43bcef\" (UID: \"9a4275eb-9684-4e6e-9ea4-6cdccb43bcef\") " Oct 27 08:31:00.707055 kubelet[2996]: I1027 08:31:00.706965 2996 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a4275eb-9684-4e6e-9ea4-6cdccb43bcef-whisker-backend-key-pair\") pod \"9a4275eb-9684-4e6e-9ea4-6cdccb43bcef\" (UID: \"9a4275eb-9684-4e6e-9ea4-6cdccb43bcef\") " Oct 27 08:31:00.707055 kubelet[2996]: I1027 08:31:00.706981 2996 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a4275eb-9684-4e6e-9ea4-6cdccb43bcef-whisker-ca-bundle\") pod \"9a4275eb-9684-4e6e-9ea4-6cdccb43bcef\" (UID: \"9a4275eb-9684-4e6e-9ea4-6cdccb43bcef\") " Oct 27 08:31:00.707245 kubelet[2996]: I1027 08:31:00.707230 2996 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a4275eb-9684-4e6e-9ea4-6cdccb43bcef-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9a4275eb-9684-4e6e-9ea4-6cdccb43bcef" (UID: "9a4275eb-9684-4e6e-9ea4-6cdccb43bcef"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 27 08:31:00.720829 systemd[1]: var-lib-kubelet-pods-9a4275eb\x2d9684\x2d4e6e\x2d9ea4\x2d6cdccb43bcef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d69gd8.mount: Deactivated successfully. Oct 27 08:31:00.720927 systemd[1]: var-lib-kubelet-pods-9a4275eb\x2d9684\x2d4e6e\x2d9ea4\x2d6cdccb43bcef-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 27 08:31:00.723201 kubelet[2996]: I1027 08:31:00.721292 2996 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a4275eb-9684-4e6e-9ea4-6cdccb43bcef-kube-api-access-69gd8" (OuterVolumeSpecName: "kube-api-access-69gd8") pod "9a4275eb-9684-4e6e-9ea4-6cdccb43bcef" (UID: "9a4275eb-9684-4e6e-9ea4-6cdccb43bcef"). InnerVolumeSpecName "kube-api-access-69gd8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 27 08:31:00.723201 kubelet[2996]: I1027 08:31:00.721622 2996 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a4275eb-9684-4e6e-9ea4-6cdccb43bcef-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9a4275eb-9684-4e6e-9ea4-6cdccb43bcef" (UID: "9a4275eb-9684-4e6e-9ea4-6cdccb43bcef"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 27 08:31:00.808219 kubelet[2996]: I1027 08:31:00.808194 2996 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a4275eb-9684-4e6e-9ea4-6cdccb43bcef-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 27 08:31:00.808219 kubelet[2996]: I1027 08:31:00.808215 2996 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a4275eb-9684-4e6e-9ea4-6cdccb43bcef-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 27 08:31:00.808219 kubelet[2996]: I1027 08:31:00.808221 2996 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-69gd8\" (UniqueName: \"kubernetes.io/projected/9a4275eb-9684-4e6e-9ea4-6cdccb43bcef-kube-api-access-69gd8\") on node \"localhost\" DevicePath \"\"" Oct 27 08:31:01.055817 systemd[1]: Removed slice kubepods-besteffort-pod9a4275eb_9684_4e6e_9ea4_6cdccb43bcef.slice - libcontainer container kubepods-besteffort-pod9a4275eb_9684_4e6e_9ea4_6cdccb43bcef.slice. Oct 27 08:31:01.130699 systemd[1]: Created slice kubepods-besteffort-pod275ad17e_d663_404d_99d3_03d139a8cc73.slice - libcontainer container kubepods-besteffort-pod275ad17e_d663_404d_99d3_03d139a8cc73.slice. Oct 27 08:31:01.151810 containerd[1685]: time="2025-10-27T08:31:01.151787295Z" level=info msg="TaskExit event in podsandbox handler container_id:\"10b23570a41acdbf3a7699fed3775bf0e43909572be3bb7b85d4c7f02d7b8544\" id:\"61169fefc06bc0a1664547be9a625742189be6c23d596f61337790163ad1ec9f\" pid:4094 exit_status:1 exited_at:{seconds:1761553861 nanos:151444743}" Oct 27 08:31:01.211112 kubelet[2996]: I1027 08:31:01.211079 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/275ad17e-d663-404d-99d3-03d139a8cc73-whisker-backend-key-pair\") pod \"whisker-86b9c948c9-rsbhz\" (UID: \"275ad17e-d663-404d-99d3-03d139a8cc73\") " pod="calico-system/whisker-86b9c948c9-rsbhz" Oct 27 08:31:01.211431 kubelet[2996]: I1027 08:31:01.211191 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-854sq\" (UniqueName: \"kubernetes.io/projected/275ad17e-d663-404d-99d3-03d139a8cc73-kube-api-access-854sq\") pod \"whisker-86b9c948c9-rsbhz\" (UID: \"275ad17e-d663-404d-99d3-03d139a8cc73\") " pod="calico-system/whisker-86b9c948c9-rsbhz" Oct 27 08:31:01.211431 kubelet[2996]: I1027 08:31:01.211213 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/275ad17e-d663-404d-99d3-03d139a8cc73-whisker-ca-bundle\") pod \"whisker-86b9c948c9-rsbhz\" (UID: \"275ad17e-d663-404d-99d3-03d139a8cc73\") " pod="calico-system/whisker-86b9c948c9-rsbhz" Oct 27 08:31:01.439693 containerd[1685]: time="2025-10-27T08:31:01.439529997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86b9c948c9-rsbhz,Uid:275ad17e-d663-404d-99d3-03d139a8cc73,Namespace:calico-system,Attempt:0,}" Oct 27 08:31:01.893476 systemd-networkd[1580]: calie0c76bebb54: Link UP Oct 27 08:31:01.893595 systemd-networkd[1580]: calie0c76bebb54: Gained carrier Oct 27 08:31:01.906359 containerd[1685]: 2025-10-27 08:31:01.471 [INFO][4108] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 08:31:01.906359 containerd[1685]: 2025-10-27 08:31:01.510 [INFO][4108] cni-plugin/plugin.go 340: 
Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--86b9c948c9--rsbhz-eth0 whisker-86b9c948c9- calico-system 275ad17e-d663-404d-99d3-03d139a8cc73 923 0 2025-10-27 08:31:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:86b9c948c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-86b9c948c9-rsbhz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie0c76bebb54 [] [] }} ContainerID="a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a" Namespace="calico-system" Pod="whisker-86b9c948c9-rsbhz" WorkloadEndpoint="localhost-k8s-whisker--86b9c948c9--rsbhz-" Oct 27 08:31:01.906359 containerd[1685]: 2025-10-27 08:31:01.510 [INFO][4108] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a" Namespace="calico-system" Pod="whisker-86b9c948c9-rsbhz" WorkloadEndpoint="localhost-k8s-whisker--86b9c948c9--rsbhz-eth0" Oct 27 08:31:01.906359 containerd[1685]: 2025-10-27 08:31:01.793 [INFO][4119] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a" HandleID="k8s-pod-network.a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a" Workload="localhost-k8s-whisker--86b9c948c9--rsbhz-eth0" Oct 27 08:31:01.906540 containerd[1685]: 2025-10-27 08:31:01.798 [INFO][4119] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a" HandleID="k8s-pod-network.a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a" Workload="localhost-k8s-whisker--86b9c948c9--rsbhz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bd5c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-86b9c948c9-rsbhz", "timestamp":"2025-10-27 08:31:01.793658065 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:31:01.906540 containerd[1685]: 2025-10-27 08:31:01.799 [INFO][4119] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:31:01.906540 containerd[1685]: 2025-10-27 08:31:01.800 [INFO][4119] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:31:01.906540 containerd[1685]: 2025-10-27 08:31:01.801 [INFO][4119] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:31:01.906540 containerd[1685]: 2025-10-27 08:31:01.831 [INFO][4119] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a" host="localhost" Oct 27 08:31:01.906540 containerd[1685]: 2025-10-27 08:31:01.845 [INFO][4119] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:31:01.906540 containerd[1685]: 2025-10-27 08:31:01.851 [INFO][4119] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:31:01.906540 containerd[1685]: 2025-10-27 08:31:01.853 [INFO][4119] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:31:01.906540 containerd[1685]: 2025-10-27 08:31:01.855 [INFO][4119] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:31:01.906540 containerd[1685]: 2025-10-27 08:31:01.855 [INFO][4119] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a" host="localhost" Oct 27 08:31:01.906699 containerd[1685]: 2025-10-27 08:31:01.858 [INFO][4119] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a Oct 27 08:31:01.906699 containerd[1685]: 2025-10-27 08:31:01.861 [INFO][4119] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a" host="localhost" Oct 27 08:31:01.906699 containerd[1685]: 2025-10-27 08:31:01.865 [INFO][4119] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a" host="localhost" Oct 27 08:31:01.906699 containerd[1685]: 2025-10-27 08:31:01.865 [INFO][4119] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a" host="localhost" Oct 27 08:31:01.906699 containerd[1685]: 2025-10-27 08:31:01.865 [INFO][4119] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:31:01.906699 containerd[1685]: 2025-10-27 08:31:01.865 [INFO][4119] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a" HandleID="k8s-pod-network.a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a" Workload="localhost-k8s-whisker--86b9c948c9--rsbhz-eth0" Oct 27 08:31:01.906791 containerd[1685]: 2025-10-27 08:31:01.867 [INFO][4108] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a" Namespace="calico-system" Pod="whisker-86b9c948c9-rsbhz" WorkloadEndpoint="localhost-k8s-whisker--86b9c948c9--rsbhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--86b9c948c9--rsbhz-eth0", GenerateName:"whisker-86b9c948c9-", Namespace:"calico-system", SelfLink:"", UID:"275ad17e-d663-404d-99d3-03d139a8cc73", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 31, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86b9c948c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-86b9c948c9-rsbhz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie0c76bebb54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:31:01.906791 containerd[1685]: 2025-10-27 08:31:01.867 [INFO][4108] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a" Namespace="calico-system" Pod="whisker-86b9c948c9-rsbhz" WorkloadEndpoint="localhost-k8s-whisker--86b9c948c9--rsbhz-eth0" Oct 27 08:31:01.906845 containerd[1685]: 2025-10-27 08:31:01.867 [INFO][4108] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0c76bebb54 ContainerID="a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a" Namespace="calico-system" Pod="whisker-86b9c948c9-rsbhz" WorkloadEndpoint="localhost-k8s-whisker--86b9c948c9--rsbhz-eth0" Oct 27 08:31:01.906845 containerd[1685]: 2025-10-27 08:31:01.884 [INFO][4108] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a" Namespace="calico-system" Pod="whisker-86b9c948c9-rsbhz" WorkloadEndpoint="localhost-k8s-whisker--86b9c948c9--rsbhz-eth0" Oct 27 08:31:01.914862 containerd[1685]: 2025-10-27 08:31:01.885 [INFO][4108] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a" Namespace="calico-system" Pod="whisker-86b9c948c9-rsbhz" WorkloadEndpoint="localhost-k8s-whisker--86b9c948c9--rsbhz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--86b9c948c9--rsbhz-eth0", GenerateName:"whisker-86b9c948c9-", Namespace:"calico-system", SelfLink:"", UID:"275ad17e-d663-404d-99d3-03d139a8cc73", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 31, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86b9c948c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a", Pod:"whisker-86b9c948c9-rsbhz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie0c76bebb54", MAC:"ea:02:76:14:56:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:31:01.915435 containerd[1685]: 2025-10-27 08:31:01.904 [INFO][4108] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a" Namespace="calico-system" Pod="whisker-86b9c948c9-rsbhz" WorkloadEndpoint="localhost-k8s-whisker--86b9c948c9--rsbhz-eth0" Oct 27 08:31:01.971922 kubelet[2996]: I1027 08:31:01.970578 2996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a4275eb-9684-4e6e-9ea4-6cdccb43bcef" path="/var/lib/kubelet/pods/9a4275eb-9684-4e6e-9ea4-6cdccb43bcef/volumes" Oct 27 08:31:02.085853 containerd[1685]: time="2025-10-27T08:31:02.085828109Z" level=info msg="connecting to shim a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a" address="unix:///run/containerd/s/fe07b3f0a9db25676d82e92da17aced2b167689b8870f47787a2d7a29dfa4eb1" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:31:02.107072 systemd[1]: Started cri-containerd-a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a.scope - libcontainer container a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a. 
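Editor's note: the endpoint written above is wired to the host-side veth calie0c76bebb54 (MAC ea:02:76:14:56:b1), and systemd-networkd later reports that same link gaining an IPv6 link-local address. A small sketch, which only makes sense when run on this node while the pod exists, that looks the interface up by name with the Go standard library:

```go
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Interface name copied from the Calico endpoint in the log above.
	iface, err := net.InterfaceByName("calie0c76bebb54")
	if err != nil {
		log.Fatalf("lookup failed: %v", err)
	}
	fmt.Printf("index=%d mtu=%d mac=%s flags=%s\n",
		iface.Index, iface.MTU, iface.HardwareAddr, iface.Flags)

	addrs, err := iface.Addrs()
	if err != nil {
		log.Fatalf("addrs failed: %v", err)
	}
	for _, a := range addrs {
		fmt.Println("addr:", a)
	}
}
```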
Oct 27 08:31:02.115321 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:31:02.143009 containerd[1685]: time="2025-10-27T08:31:02.142985234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86b9c948c9-rsbhz,Uid:275ad17e-d663-404d-99d3-03d139a8cc73,Namespace:calico-system,Attempt:0,} returns sandbox id \"a3b63a6ae2b673d5273cd30c54b3aa7435099a23b026ff5384ab7f65beffe51a\"" Oct 27 08:31:02.144940 containerd[1685]: time="2025-10-27T08:31:02.144726669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 08:31:02.506495 containerd[1685]: time="2025-10-27T08:31:02.506410764Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:31:02.506963 containerd[1685]: time="2025-10-27T08:31:02.506932621Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 08:31:02.507027 containerd[1685]: time="2025-10-27T08:31:02.506995095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 08:31:02.507244 kubelet[2996]: E1027 08:31:02.507175 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:31:02.507244 kubelet[2996]: E1027 08:31:02.507229 2996 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:31:02.514430 kubelet[2996]: E1027 08:31:02.514393 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:857366382d06447f833489122285d3f7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-854sq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86b9c948c9-rsbhz_calico-system(275ad17e-d663-404d-99d3-03d139a8cc73): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:02.516604 containerd[1685]: time="2025-10-27T08:31:02.516576561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 08:31:02.916002 containerd[1685]: time="2025-10-27T08:31:02.915948584Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:31:02.916583 containerd[1685]: time="2025-10-27T08:31:02.916548887Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 08:31:02.916661 containerd[1685]: time="2025-10-27T08:31:02.916564925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 08:31:02.916780 kubelet[2996]: E1027 08:31:02.916733 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:31:02.916823 kubelet[2996]: E1027 08:31:02.916789 2996 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:31:02.917045 kubelet[2996]: E1027 08:31:02.917005 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-854sq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86b9c948c9-rsbhz_calico-system(275ad17e-d663-404d-99d3-03d139a8cc73): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:02.918491 kubelet[2996]: E1027 08:31:02.918455 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b9c948c9-rsbhz" podUID="275ad17e-d663-404d-99d3-03d139a8cc73" Oct 27 08:31:03.054793 kubelet[2996]: E1027 08:31:03.054756 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b9c948c9-rsbhz" podUID="275ad17e-d663-404d-99d3-03d139a8cc73" Oct 27 08:31:03.854695 containerd[1685]: time="2025-10-27T08:31:03.854664481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fcb5d84f-x85t2,Uid:f209a925-c7ce-4786-8b27-b2615413f1ab,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:31:03.855050 containerd[1685]: time="2025-10-27T08:31:03.855029438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xmfql,Uid:d4806ae9-c0ee-46ba-a6b8-04fea562ec1c,Namespace:kube-system,Attempt:0,}" Oct 27 08:31:03.863552 systemd-networkd[1580]: calie0c76bebb54: Gained IPv6LL Oct 27 08:31:03.924888 systemd-networkd[1580]: cali01790842e1f: Link UP Oct 27 08:31:03.925193 systemd-networkd[1580]: cali01790842e1f: Gained carrier Oct 27 08:31:03.934411 containerd[1685]: 2025-10-27 08:31:03.879 [INFO][4299] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 08:31:03.934411 containerd[1685]: 2025-10-27 08:31:03.884 [INFO][4299] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--74fcb5d84f--x85t2-eth0 calico-apiserver-74fcb5d84f- calico-apiserver f209a925-c7ce-4786-8b27-b2615413f1ab 853 0 2025-10-27 08:30:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74fcb5d84f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-74fcb5d84f-x85t2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali01790842e1f [] [] }} ContainerID="b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045" Namespace="calico-apiserver" Pod="calico-apiserver-74fcb5d84f-x85t2" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fcb5d84f--x85t2-" Oct 27 08:31:03.934411 containerd[1685]: 2025-10-27 08:31:03.884 [INFO][4299] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045" Namespace="calico-apiserver" Pod="calico-apiserver-74fcb5d84f-x85t2" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fcb5d84f--x85t2-eth0" Oct 27 08:31:03.934411 containerd[1685]: 2025-10-27 08:31:03.900 [INFO][4322] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045" HandleID="k8s-pod-network.b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045" Workload="localhost-k8s-calico--apiserver--74fcb5d84f--x85t2-eth0" Oct 27 08:31:03.935005 containerd[1685]: 2025-10-27 08:31:03.901 [INFO][4322] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045" 
HandleID="k8s-pod-network.b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045" Workload="localhost-k8s-calico--apiserver--74fcb5d84f--x85t2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f8b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-74fcb5d84f-x85t2", "timestamp":"2025-10-27 08:31:03.900894568 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:31:03.935005 containerd[1685]: 2025-10-27 08:31:03.901 [INFO][4322] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:31:03.935005 containerd[1685]: 2025-10-27 08:31:03.901 [INFO][4322] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:31:03.935005 containerd[1685]: 2025-10-27 08:31:03.901 [INFO][4322] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:31:03.935005 containerd[1685]: 2025-10-27 08:31:03.906 [INFO][4322] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045" host="localhost" Oct 27 08:31:03.935005 containerd[1685]: 2025-10-27 08:31:03.908 [INFO][4322] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:31:03.935005 containerd[1685]: 2025-10-27 08:31:03.910 [INFO][4322] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:31:03.935005 containerd[1685]: 2025-10-27 08:31:03.911 [INFO][4322] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:31:03.935005 containerd[1685]: 2025-10-27 08:31:03.912 [INFO][4322] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:31:03.935005 containerd[1685]: 2025-10-27 08:31:03.912 [INFO][4322] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045" host="localhost" Oct 27 08:31:03.935200 containerd[1685]: 2025-10-27 08:31:03.913 [INFO][4322] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045 Oct 27 08:31:03.935200 containerd[1685]: 2025-10-27 08:31:03.914 [INFO][4322] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045" host="localhost" Oct 27 08:31:03.935200 containerd[1685]: 2025-10-27 08:31:03.917 [INFO][4322] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045" host="localhost" Oct 27 08:31:03.935200 containerd[1685]: 2025-10-27 08:31:03.917 [INFO][4322] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045" host="localhost" Oct 27 08:31:03.935200 containerd[1685]: 2025-10-27 08:31:03.917 [INFO][4322] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:31:03.935200 containerd[1685]: 2025-10-27 08:31:03.918 [INFO][4322] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045" HandleID="k8s-pod-network.b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045" Workload="localhost-k8s-calico--apiserver--74fcb5d84f--x85t2-eth0" Oct 27 08:31:03.936803 containerd[1685]: 2025-10-27 08:31:03.922 [INFO][4299] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045" Namespace="calico-apiserver" Pod="calico-apiserver-74fcb5d84f-x85t2" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fcb5d84f--x85t2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74fcb5d84f--x85t2-eth0", GenerateName:"calico-apiserver-74fcb5d84f-", Namespace:"calico-apiserver", SelfLink:"", UID:"f209a925-c7ce-4786-8b27-b2615413f1ab", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 30, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74fcb5d84f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-74fcb5d84f-x85t2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali01790842e1f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:31:03.936848 containerd[1685]: 2025-10-27 08:31:03.922 [INFO][4299] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045" Namespace="calico-apiserver" Pod="calico-apiserver-74fcb5d84f-x85t2" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fcb5d84f--x85t2-eth0" Oct 27 08:31:03.936848 containerd[1685]: 2025-10-27 08:31:03.922 [INFO][4299] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01790842e1f ContainerID="b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045" Namespace="calico-apiserver" Pod="calico-apiserver-74fcb5d84f-x85t2" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fcb5d84f--x85t2-eth0" Oct 27 08:31:03.936848 containerd[1685]: 2025-10-27 08:31:03.925 [INFO][4299] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045" Namespace="calico-apiserver" Pod="calico-apiserver-74fcb5d84f-x85t2" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fcb5d84f--x85t2-eth0" Oct 27 08:31:03.936896 containerd[1685]: 2025-10-27 08:31:03.925 [INFO][4299] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045" Namespace="calico-apiserver" Pod="calico-apiserver-74fcb5d84f-x85t2" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fcb5d84f--x85t2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74fcb5d84f--x85t2-eth0", GenerateName:"calico-apiserver-74fcb5d84f-", Namespace:"calico-apiserver", SelfLink:"", UID:"f209a925-c7ce-4786-8b27-b2615413f1ab", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 30, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74fcb5d84f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045", Pod:"calico-apiserver-74fcb5d84f-x85t2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali01790842e1f", MAC:"7e:95:fc:cf:d7:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:31:03.936949 containerd[1685]: 2025-10-27 08:31:03.932 [INFO][4299] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045" Namespace="calico-apiserver" Pod="calico-apiserver-74fcb5d84f-x85t2" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fcb5d84f--x85t2-eth0" Oct 27 08:31:03.947589 containerd[1685]: time="2025-10-27T08:31:03.947550987Z" level=info msg="connecting to shim b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045" address="unix:///run/containerd/s/e037e521d886d6f20c6ac3fb35c96383f74c85b755290f7e3a1dd541e9869565" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:31:03.965128 systemd[1]: Started cri-containerd-b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045.scope - libcontainer container b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045. 
Oct 27 08:31:03.974044 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:31:04.000120 containerd[1685]: time="2025-10-27T08:31:04.000100339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fcb5d84f-x85t2,Uid:f209a925-c7ce-4786-8b27-b2615413f1ab,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b7d51d75456c9e1f4d953d7ac09a2a23a7a479244b5ac19d9ac856b4a7b57045\"" Oct 27 08:31:04.000979 containerd[1685]: time="2025-10-27T08:31:04.000964954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:31:04.022838 systemd-networkd[1580]: calif2c359e3ebd: Link UP Oct 27 08:31:04.022981 systemd-networkd[1580]: calif2c359e3ebd: Gained carrier Oct 27 08:31:04.034750 containerd[1685]: 2025-10-27 08:31:03.876 [INFO][4300] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 08:31:04.034750 containerd[1685]: 2025-10-27 08:31:03.885 [INFO][4300] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--xmfql-eth0 coredns-674b8bbfcf- kube-system d4806ae9-c0ee-46ba-a6b8-04fea562ec1c 845 0 2025-10-27 08:30:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-xmfql eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif2c359e3ebd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmfql" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmfql-" Oct 27 08:31:04.034750 containerd[1685]: 2025-10-27 08:31:03.885 [INFO][4300] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmfql" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmfql-eth0" Oct 27 08:31:04.034750 containerd[1685]: 2025-10-27 08:31:03.906 [INFO][4324] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe" HandleID="k8s-pod-network.0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe" Workload="localhost-k8s-coredns--674b8bbfcf--xmfql-eth0" Oct 27 08:31:04.034984 containerd[1685]: 2025-10-27 08:31:03.906 [INFO][4324] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe" HandleID="k8s-pod-network.0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe" Workload="localhost-k8s-coredns--674b8bbfcf--xmfql-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d50f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-xmfql", "timestamp":"2025-10-27 08:31:03.906093453 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:31:04.034984 containerd[1685]: 2025-10-27 08:31:03.906 [INFO][4324] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 27 08:31:04.034984 containerd[1685]: 2025-10-27 08:31:03.918 [INFO][4324] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:31:04.034984 containerd[1685]: 2025-10-27 08:31:03.918 [INFO][4324] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:31:04.034984 containerd[1685]: 2025-10-27 08:31:04.007 [INFO][4324] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe" host="localhost" Oct 27 08:31:04.034984 containerd[1685]: 2025-10-27 08:31:04.010 [INFO][4324] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:31:04.034984 containerd[1685]: 2025-10-27 08:31:04.012 [INFO][4324] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:31:04.034984 containerd[1685]: 2025-10-27 08:31:04.012 [INFO][4324] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:31:04.034984 containerd[1685]: 2025-10-27 08:31:04.013 [INFO][4324] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:31:04.034984 containerd[1685]: 2025-10-27 08:31:04.013 [INFO][4324] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe" host="localhost" Oct 27 08:31:04.035229 containerd[1685]: 2025-10-27 08:31:04.014 [INFO][4324] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe Oct 27 08:31:04.035229 containerd[1685]: 2025-10-27 08:31:04.016 [INFO][4324] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe" host="localhost" Oct 27 08:31:04.035229 containerd[1685]: 2025-10-27 08:31:04.019 [INFO][4324] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe" host="localhost" Oct 27 08:31:04.035229 containerd[1685]: 2025-10-27 08:31:04.019 [INFO][4324] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe" host="localhost" Oct 27 08:31:04.035229 containerd[1685]: 2025-10-27 08:31:04.019 [INFO][4324] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:31:04.035229 containerd[1685]: 2025-10-27 08:31:04.019 [INFO][4324] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe" HandleID="k8s-pod-network.0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe" Workload="localhost-k8s-coredns--674b8bbfcf--xmfql-eth0" Oct 27 08:31:04.035355 containerd[1685]: 2025-10-27 08:31:04.020 [INFO][4300] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmfql" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmfql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xmfql-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d4806ae9-c0ee-46ba-a6b8-04fea562ec1c", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-xmfql", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2c359e3ebd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:31:04.035411 containerd[1685]: 2025-10-27 08:31:04.020 [INFO][4300] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmfql" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmfql-eth0" Oct 27 08:31:04.035411 containerd[1685]: 2025-10-27 08:31:04.021 [INFO][4300] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2c359e3ebd ContainerID="0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmfql" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmfql-eth0" Oct 27 08:31:04.035411 containerd[1685]: 2025-10-27 08:31:04.022 [INFO][4300] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmfql" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmfql-eth0" Oct 27 08:31:04.035509 
containerd[1685]: 2025-10-27 08:31:04.022 [INFO][4300] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmfql" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmfql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xmfql-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d4806ae9-c0ee-46ba-a6b8-04fea562ec1c", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe", Pod:"coredns-674b8bbfcf-xmfql", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2c359e3ebd", MAC:"6e:33:fc:97:7c:be", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:31:04.035509 containerd[1685]: 2025-10-27 08:31:04.032 [INFO][4300] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmfql" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmfql-eth0" Oct 27 08:31:04.057954 kubelet[2996]: E1027 08:31:04.057898 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b9c948c9-rsbhz" 
podUID="275ad17e-d663-404d-99d3-03d139a8cc73" Oct 27 08:31:04.081405 containerd[1685]: time="2025-10-27T08:31:04.081365386Z" level=info msg="connecting to shim 0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe" address="unix:///run/containerd/s/1fd922a9e875afe86f4badef86a988f8ad3505dd866d95461ca23dd26c193876" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:31:04.098015 systemd[1]: Started cri-containerd-0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe.scope - libcontainer container 0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe. Oct 27 08:31:04.107706 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:31:04.139268 containerd[1685]: time="2025-10-27T08:31:04.139243842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xmfql,Uid:d4806ae9-c0ee-46ba-a6b8-04fea562ec1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe\"" Oct 27 08:31:04.141461 containerd[1685]: time="2025-10-27T08:31:04.141444504Z" level=info msg="CreateContainer within sandbox \"0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 08:31:04.153358 containerd[1685]: time="2025-10-27T08:31:04.153247804Z" level=info msg="Container 9285fd142c1a3e723787a9835336962b85d14defa3edb2a8949cd9ae81b12a17: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:31:04.156155 containerd[1685]: time="2025-10-27T08:31:04.156137927Z" level=info msg="CreateContainer within sandbox \"0f092899c0d4d3b99ab049d95127d3833f13cd5d891373e7e30aecc48e91effe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9285fd142c1a3e723787a9835336962b85d14defa3edb2a8949cd9ae81b12a17\"" Oct 27 08:31:04.157887 containerd[1685]: time="2025-10-27T08:31:04.157222445Z" level=info msg="StartContainer for \"9285fd142c1a3e723787a9835336962b85d14defa3edb2a8949cd9ae81b12a17\"" Oct 27 08:31:04.158153 containerd[1685]: time="2025-10-27T08:31:04.158141280Z" level=info msg="connecting to shim 9285fd142c1a3e723787a9835336962b85d14defa3edb2a8949cd9ae81b12a17" address="unix:///run/containerd/s/1fd922a9e875afe86f4badef86a988f8ad3505dd866d95461ca23dd26c193876" protocol=ttrpc version=3 Oct 27 08:31:04.173071 systemd[1]: Started cri-containerd-9285fd142c1a3e723787a9835336962b85d14defa3edb2a8949cd9ae81b12a17.scope - libcontainer container 9285fd142c1a3e723787a9835336962b85d14defa3edb2a8949cd9ae81b12a17. 
Oct 27 08:31:04.246389 containerd[1685]: time="2025-10-27T08:31:04.246322226Z" level=info msg="StartContainer for \"9285fd142c1a3e723787a9835336962b85d14defa3edb2a8949cd9ae81b12a17\" returns successfully" Oct 27 08:31:04.569179 containerd[1685]: time="2025-10-27T08:31:04.569123274Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:31:04.570005 containerd[1685]: time="2025-10-27T08:31:04.569886949Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:31:04.570005 containerd[1685]: time="2025-10-27T08:31:04.569963747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:31:04.570159 kubelet[2996]: E1027 08:31:04.570127 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:31:04.570309 kubelet[2996]: E1027 08:31:04.570165 2996 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:31:04.571041 kubelet[2996]: E1027 08:31:04.570999 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j45rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74fcb5d84f-x85t2_calico-apiserver(f209a925-c7ce-4786-8b27-b2615413f1ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:04.572423 kubelet[2996]: E1027 08:31:04.572387 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74fcb5d84f-x85t2" podUID="f209a925-c7ce-4786-8b27-b2615413f1ab" Oct 27 08:31:05.067977 kubelet[2996]: E1027 08:31:05.066163 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74fcb5d84f-x85t2" podUID="f209a925-c7ce-4786-8b27-b2615413f1ab" Oct 27 08:31:05.078958 systemd-networkd[1580]: calif2c359e3ebd: Gained IPv6LL Oct 27 08:31:05.123195 kubelet[2996]: I1027 08:31:05.123149 2996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xmfql" podStartSLOduration=36.123134637 podStartE2EDuration="36.123134637s" podCreationTimestamp="2025-10-27 08:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:31:05.097142635 +0000 UTC m=+43.410158534" watchObservedRunningTime="2025-10-27 08:31:05.123134637 +0000 UTC m=+43.436150536" Oct 27 08:31:05.462124 systemd-networkd[1580]: cali01790842e1f: Gained IPv6LL Oct 27 08:31:06.054023 kubelet[2996]: I1027 08:31:06.053894 2996 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 27 08:31:06.538838 systemd-networkd[1580]: vxlan.calico: Link UP Oct 27 08:31:06.538842 systemd-networkd[1580]: vxlan.calico: Gained carrier Oct 27 08:31:06.852777 containerd[1685]: time="2025-10-27T08:31:06.852565769Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-mxfbf,Uid:01295943-bd91-493a-b86a-7f82f8f61b26,Namespace:calico-system,Attempt:0,}" Oct 27 08:31:06.853504 containerd[1685]: time="2025-10-27T08:31:06.853028928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fcb5d84f-hzkgp,Uid:423be6d6-16fc-456a-a6d1-876213aa577d,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:31:06.853504 containerd[1685]: time="2025-10-27T08:31:06.853177997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-565cf6bf7-fd9ql,Uid:024044b8-ffed-437e-be0d-2af9d6b61984,Namespace:calico-system,Attempt:0,}" Oct 27 08:31:06.880171 containerd[1685]: time="2025-10-27T08:31:06.880126253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bnvzn,Uid:e75ee41c-b373-44a8-9425-dc19e6224499,Namespace:kube-system,Attempt:0,}" Oct 27 08:31:06.980487 systemd-networkd[1580]: cali9debba472cf: Link UP Oct 27 08:31:06.982314 systemd-networkd[1580]: cali9debba472cf: Gained carrier Oct 27 08:31:07.003010 containerd[1685]: 2025-10-27 08:31:06.913 [INFO][4653] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--74fcb5d84f--hzkgp-eth0 calico-apiserver-74fcb5d84f- calico-apiserver 423be6d6-16fc-456a-a6d1-876213aa577d 857 0 2025-10-27 08:30:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74fcb5d84f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-74fcb5d84f-hzkgp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9debba472cf [] [] }} ContainerID="b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1" Namespace="calico-apiserver" Pod="calico-apiserver-74fcb5d84f-hzkgp" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fcb5d84f--hzkgp-" Oct 27 08:31:07.003010 containerd[1685]: 2025-10-27 08:31:06.914 [INFO][4653] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1" Namespace="calico-apiserver" Pod="calico-apiserver-74fcb5d84f-hzkgp" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fcb5d84f--hzkgp-eth0" Oct 27 08:31:07.003010 containerd[1685]: 2025-10-27 08:31:06.948 [INFO][4701] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1" HandleID="k8s-pod-network.b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1" Workload="localhost-k8s-calico--apiserver--74fcb5d84f--hzkgp-eth0" Oct 27 08:31:07.003010 containerd[1685]: 2025-10-27 08:31:06.948 [INFO][4701] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1" HandleID="k8s-pod-network.b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1" Workload="localhost-k8s-calico--apiserver--74fcb5d84f--hzkgp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ccfe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-74fcb5d84f-hzkgp", "timestamp":"2025-10-27 08:31:06.948847987 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:31:07.003010 containerd[1685]: 2025-10-27 08:31:06.949 [INFO][4701] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:31:07.003010 containerd[1685]: 2025-10-27 08:31:06.949 [INFO][4701] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:31:07.003010 containerd[1685]: 2025-10-27 08:31:06.949 [INFO][4701] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:31:07.003010 containerd[1685]: 2025-10-27 08:31:06.956 [INFO][4701] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1" host="localhost" Oct 27 08:31:07.003010 containerd[1685]: 2025-10-27 08:31:06.960 [INFO][4701] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:31:07.003010 containerd[1685]: 2025-10-27 08:31:06.962 [INFO][4701] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:31:07.003010 containerd[1685]: 2025-10-27 08:31:06.964 [INFO][4701] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:31:07.003010 containerd[1685]: 2025-10-27 08:31:06.965 [INFO][4701] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:31:07.003010 containerd[1685]: 2025-10-27 08:31:06.965 [INFO][4701] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1" host="localhost" Oct 27 08:31:07.003010 containerd[1685]: 2025-10-27 08:31:06.966 [INFO][4701] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1 Oct 27 08:31:07.003010 containerd[1685]: 2025-10-27 08:31:06.969 [INFO][4701] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1" host="localhost" Oct 27 08:31:07.003010 containerd[1685]: 2025-10-27 08:31:06.973 [INFO][4701] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1" host="localhost" Oct 27 08:31:07.003010 containerd[1685]: 2025-10-27 08:31:06.973 [INFO][4701] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1" host="localhost" Oct 27 08:31:07.003010 containerd[1685]: 2025-10-27 08:31:06.973 [INFO][4701] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:31:07.003010 containerd[1685]: 2025-10-27 08:31:06.973 [INFO][4701] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1" HandleID="k8s-pod-network.b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1" Workload="localhost-k8s-calico--apiserver--74fcb5d84f--hzkgp-eth0" Oct 27 08:31:07.007175 containerd[1685]: 2025-10-27 08:31:06.977 [INFO][4653] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1" Namespace="calico-apiserver" Pod="calico-apiserver-74fcb5d84f-hzkgp" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fcb5d84f--hzkgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74fcb5d84f--hzkgp-eth0", GenerateName:"calico-apiserver-74fcb5d84f-", Namespace:"calico-apiserver", SelfLink:"", UID:"423be6d6-16fc-456a-a6d1-876213aa577d", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 30, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74fcb5d84f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-74fcb5d84f-hzkgp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9debba472cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:31:07.007175 containerd[1685]: 2025-10-27 08:31:06.977 [INFO][4653] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1" Namespace="calico-apiserver" Pod="calico-apiserver-74fcb5d84f-hzkgp" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fcb5d84f--hzkgp-eth0" Oct 27 08:31:07.007175 containerd[1685]: 2025-10-27 08:31:06.977 [INFO][4653] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9debba472cf ContainerID="b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1" Namespace="calico-apiserver" Pod="calico-apiserver-74fcb5d84f-hzkgp" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fcb5d84f--hzkgp-eth0" Oct 27 08:31:07.007175 containerd[1685]: 2025-10-27 08:31:06.980 [INFO][4653] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1" Namespace="calico-apiserver" Pod="calico-apiserver-74fcb5d84f-hzkgp" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fcb5d84f--hzkgp-eth0" Oct 27 08:31:07.007175 containerd[1685]: 2025-10-27 08:31:06.981 [INFO][4653] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1" Namespace="calico-apiserver" Pod="calico-apiserver-74fcb5d84f-hzkgp" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fcb5d84f--hzkgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74fcb5d84f--hzkgp-eth0", GenerateName:"calico-apiserver-74fcb5d84f-", Namespace:"calico-apiserver", SelfLink:"", UID:"423be6d6-16fc-456a-a6d1-876213aa577d", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 30, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74fcb5d84f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1", Pod:"calico-apiserver-74fcb5d84f-hzkgp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9debba472cf", MAC:"d6:c9:b3:17:aa:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:31:07.007175 containerd[1685]: 2025-10-27 08:31:06.991 [INFO][4653] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1" Namespace="calico-apiserver" Pod="calico-apiserver-74fcb5d84f-hzkgp" WorkloadEndpoint="localhost-k8s-calico--apiserver--74fcb5d84f--hzkgp-eth0" Oct 27 08:31:07.023718 containerd[1685]: time="2025-10-27T08:31:07.023689996Z" level=info msg="connecting to shim b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1" address="unix:///run/containerd/s/4c3549e5cfc73dd490c7f3a1e0113f7d2bdd9aaa1f960f571b154ab4f636deee" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:31:07.049057 systemd[1]: Started cri-containerd-b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1.scope - libcontainer container b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1. 
Oct 27 08:31:07.057066 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:31:07.081948 systemd-networkd[1580]: cali92af3fbbbc9: Link UP Oct 27 08:31:07.082786 systemd-networkd[1580]: cali92af3fbbbc9: Gained carrier Oct 27 08:31:07.094127 containerd[1685]: 2025-10-27 08:31:06.909 [INFO][4658] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--565cf6bf7--fd9ql-eth0 calico-kube-controllers-565cf6bf7- calico-system 024044b8-ffed-437e-be0d-2af9d6b61984 856 0 2025-10-27 08:30:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:565cf6bf7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-565cf6bf7-fd9ql eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali92af3fbbbc9 [] [] }} ContainerID="2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a" Namespace="calico-system" Pod="calico-kube-controllers-565cf6bf7-fd9ql" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--565cf6bf7--fd9ql-" Oct 27 08:31:07.094127 containerd[1685]: 2025-10-27 08:31:06.909 [INFO][4658] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a" Namespace="calico-system" Pod="calico-kube-controllers-565cf6bf7-fd9ql" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--565cf6bf7--fd9ql-eth0" Oct 27 08:31:07.094127 containerd[1685]: 2025-10-27 08:31:06.954 [INFO][4696] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a" HandleID="k8s-pod-network.2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a" Workload="localhost-k8s-calico--kube--controllers--565cf6bf7--fd9ql-eth0" Oct 27 08:31:07.094127 containerd[1685]: 2025-10-27 08:31:06.954 [INFO][4696] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a" HandleID="k8s-pod-network.2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a" Workload="localhost-k8s-calico--kube--controllers--565cf6bf7--fd9ql-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001038f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-565cf6bf7-fd9ql", "timestamp":"2025-10-27 08:31:06.954457456 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:31:07.094127 containerd[1685]: 2025-10-27 08:31:06.954 [INFO][4696] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:31:07.094127 containerd[1685]: 2025-10-27 08:31:06.973 [INFO][4696] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:31:07.094127 containerd[1685]: 2025-10-27 08:31:06.973 [INFO][4696] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:31:07.094127 containerd[1685]: 2025-10-27 08:31:07.057 [INFO][4696] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a" host="localhost" Oct 27 08:31:07.094127 containerd[1685]: 2025-10-27 08:31:07.060 [INFO][4696] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:31:07.094127 containerd[1685]: 2025-10-27 08:31:07.062 [INFO][4696] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:31:07.094127 containerd[1685]: 2025-10-27 08:31:07.064 [INFO][4696] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:31:07.094127 containerd[1685]: 2025-10-27 08:31:07.065 [INFO][4696] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:31:07.094127 containerd[1685]: 2025-10-27 08:31:07.065 [INFO][4696] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a" host="localhost" Oct 27 08:31:07.094127 containerd[1685]: 2025-10-27 08:31:07.066 [INFO][4696] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a Oct 27 08:31:07.094127 containerd[1685]: 2025-10-27 08:31:07.067 [INFO][4696] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a" host="localhost" Oct 27 08:31:07.094127 containerd[1685]: 2025-10-27 08:31:07.071 [INFO][4696] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a" host="localhost" Oct 27 08:31:07.094127 containerd[1685]: 2025-10-27 08:31:07.071 [INFO][4696] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a" host="localhost" Oct 27 08:31:07.094127 containerd[1685]: 2025-10-27 08:31:07.071 [INFO][4696] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:31:07.094127 containerd[1685]: 2025-10-27 08:31:07.072 [INFO][4696] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a" HandleID="k8s-pod-network.2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a" Workload="localhost-k8s-calico--kube--controllers--565cf6bf7--fd9ql-eth0" Oct 27 08:31:07.095817 containerd[1685]: 2025-10-27 08:31:07.076 [INFO][4658] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a" Namespace="calico-system" Pod="calico-kube-controllers-565cf6bf7-fd9ql" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--565cf6bf7--fd9ql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--565cf6bf7--fd9ql-eth0", GenerateName:"calico-kube-controllers-565cf6bf7-", Namespace:"calico-system", SelfLink:"", UID:"024044b8-ffed-437e-be0d-2af9d6b61984", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 30, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"565cf6bf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-565cf6bf7-fd9ql", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali92af3fbbbc9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:31:07.095817 containerd[1685]: 2025-10-27 08:31:07.076 [INFO][4658] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a" Namespace="calico-system" Pod="calico-kube-controllers-565cf6bf7-fd9ql" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--565cf6bf7--fd9ql-eth0" Oct 27 08:31:07.095817 containerd[1685]: 2025-10-27 08:31:07.076 [INFO][4658] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92af3fbbbc9 ContainerID="2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a" Namespace="calico-system" Pod="calico-kube-controllers-565cf6bf7-fd9ql" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--565cf6bf7--fd9ql-eth0" Oct 27 08:31:07.095817 containerd[1685]: 2025-10-27 08:31:07.083 [INFO][4658] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a" Namespace="calico-system" Pod="calico-kube-controllers-565cf6bf7-fd9ql" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--565cf6bf7--fd9ql-eth0" Oct 27 08:31:07.095817 containerd[1685]: 2025-10-27 08:31:07.083 [INFO][4658] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a" Namespace="calico-system" Pod="calico-kube-controllers-565cf6bf7-fd9ql" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--565cf6bf7--fd9ql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--565cf6bf7--fd9ql-eth0", GenerateName:"calico-kube-controllers-565cf6bf7-", Namespace:"calico-system", SelfLink:"", UID:"024044b8-ffed-437e-be0d-2af9d6b61984", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 30, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"565cf6bf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a", Pod:"calico-kube-controllers-565cf6bf7-fd9ql", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali92af3fbbbc9", MAC:"3e:c1:61:d5:d1:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:31:07.095817 containerd[1685]: 2025-10-27 08:31:07.089 [INFO][4658] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a" Namespace="calico-system" Pod="calico-kube-controllers-565cf6bf7-fd9ql" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--565cf6bf7--fd9ql-eth0" Oct 27 08:31:07.104470 containerd[1685]: time="2025-10-27T08:31:07.103795292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74fcb5d84f-hzkgp,Uid:423be6d6-16fc-456a-a6d1-876213aa577d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b3a0203b105733f4c30425fa4b1f3a19baa41acea4383ed1baac0b21bf2702d1\"" Oct 27 08:31:07.105290 containerd[1685]: time="2025-10-27T08:31:07.105269354Z" level=info msg="connecting to shim 2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a" address="unix:///run/containerd/s/19108718bf6d95bcfec005fcaf1456013f02ea5fef5c7bcbe8b3a374f7adeb47" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:31:07.106721 containerd[1685]: time="2025-10-27T08:31:07.106709778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:31:07.128074 systemd[1]: Started cri-containerd-2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a.scope - libcontainer container 2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a. 
Oct 27 08:31:07.137634 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:31:07.171157 containerd[1685]: time="2025-10-27T08:31:07.171124208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-565cf6bf7-fd9ql,Uid:024044b8-ffed-437e-be0d-2af9d6b61984,Namespace:calico-system,Attempt:0,} returns sandbox id \"2e9d8ae88b7b81553d8a7f9728135589041da4a7a9b1f5a9a597ea8a4e069e9a\"" Oct 27 08:31:07.182393 systemd-networkd[1580]: calie58b61c6dbb: Link UP Oct 27 08:31:07.183279 systemd-networkd[1580]: calie58b61c6dbb: Gained carrier Oct 27 08:31:07.193855 containerd[1685]: 2025-10-27 08:31:06.917 [INFO][4650] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--mxfbf-eth0 goldmane-666569f655- calico-system 01295943-bd91-493a-b86a-7f82f8f61b26 854 0 2025-10-27 08:30:38 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-mxfbf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie58b61c6dbb [] [] }} ContainerID="b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061" Namespace="calico-system" Pod="goldmane-666569f655-mxfbf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mxfbf-" Oct 27 08:31:07.193855 containerd[1685]: 2025-10-27 08:31:06.917 [INFO][4650] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061" Namespace="calico-system" Pod="goldmane-666569f655-mxfbf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mxfbf-eth0" Oct 27 08:31:07.193855 containerd[1685]: 2025-10-27 08:31:06.988 [INFO][4715] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061" HandleID="k8s-pod-network.b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061" Workload="localhost-k8s-goldmane--666569f655--mxfbf-eth0" Oct 27 08:31:07.193855 containerd[1685]: 2025-10-27 08:31:06.988 [INFO][4715] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061" HandleID="k8s-pod-network.b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061" Workload="localhost-k8s-goldmane--666569f655--mxfbf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-mxfbf", "timestamp":"2025-10-27 08:31:06.988122 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:31:07.193855 containerd[1685]: 2025-10-27 08:31:06.988 [INFO][4715] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:31:07.193855 containerd[1685]: 2025-10-27 08:31:07.071 [INFO][4715] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:31:07.193855 containerd[1685]: 2025-10-27 08:31:07.072 [INFO][4715] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:31:07.193855 containerd[1685]: 2025-10-27 08:31:07.159 [INFO][4715] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061" host="localhost" Oct 27 08:31:07.193855 containerd[1685]: 2025-10-27 08:31:07.162 [INFO][4715] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:31:07.193855 containerd[1685]: 2025-10-27 08:31:07.165 [INFO][4715] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:31:07.193855 containerd[1685]: 2025-10-27 08:31:07.166 [INFO][4715] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:31:07.193855 containerd[1685]: 2025-10-27 08:31:07.168 [INFO][4715] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:31:07.193855 containerd[1685]: 2025-10-27 08:31:07.168 [INFO][4715] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061" host="localhost" Oct 27 08:31:07.193855 containerd[1685]: 2025-10-27 08:31:07.169 [INFO][4715] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061 Oct 27 08:31:07.193855 containerd[1685]: 2025-10-27 08:31:07.171 [INFO][4715] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061" host="localhost" Oct 27 08:31:07.193855 containerd[1685]: 2025-10-27 08:31:07.176 [INFO][4715] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061" host="localhost" Oct 27 08:31:07.193855 containerd[1685]: 2025-10-27 08:31:07.176 [INFO][4715] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061" host="localhost" Oct 27 08:31:07.193855 containerd[1685]: 2025-10-27 08:31:07.176 [INFO][4715] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:31:07.193855 containerd[1685]: 2025-10-27 08:31:07.176 [INFO][4715] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061" HandleID="k8s-pod-network.b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061" Workload="localhost-k8s-goldmane--666569f655--mxfbf-eth0" Oct 27 08:31:07.195734 containerd[1685]: 2025-10-27 08:31:07.179 [INFO][4650] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061" Namespace="calico-system" Pod="goldmane-666569f655-mxfbf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mxfbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--mxfbf-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"01295943-bd91-493a-b86a-7f82f8f61b26", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-mxfbf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie58b61c6dbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:31:07.195734 containerd[1685]: 2025-10-27 08:31:07.179 [INFO][4650] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061" Namespace="calico-system" Pod="goldmane-666569f655-mxfbf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mxfbf-eth0" Oct 27 08:31:07.195734 containerd[1685]: 2025-10-27 08:31:07.179 [INFO][4650] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie58b61c6dbb ContainerID="b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061" Namespace="calico-system" Pod="goldmane-666569f655-mxfbf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mxfbf-eth0" Oct 27 08:31:07.195734 containerd[1685]: 2025-10-27 08:31:07.182 [INFO][4650] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061" Namespace="calico-system" Pod="goldmane-666569f655-mxfbf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mxfbf-eth0" Oct 27 08:31:07.195734 containerd[1685]: 2025-10-27 08:31:07.183 [INFO][4650] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061" Namespace="calico-system" Pod="goldmane-666569f655-mxfbf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mxfbf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--mxfbf-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"01295943-bd91-493a-b86a-7f82f8f61b26", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061", Pod:"goldmane-666569f655-mxfbf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie58b61c6dbb", MAC:"8a:76:a9:b6:51:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:31:07.195734 containerd[1685]: 2025-10-27 08:31:07.190 [INFO][4650] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061" Namespace="calico-system" Pod="goldmane-666569f655-mxfbf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mxfbf-eth0" Oct 27 08:31:07.205029 containerd[1685]: time="2025-10-27T08:31:07.204995103Z" level=info msg="connecting to shim b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061" address="unix:///run/containerd/s/084aaed239ba6b045f1ff25d1bcb8287a30225cf2da9837885c286a6c55869f6" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:31:07.219058 systemd[1]: Started cri-containerd-b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061.scope - libcontainer container b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061. 
Oct 27 08:31:07.227262 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:31:07.255267 containerd[1685]: time="2025-10-27T08:31:07.255241135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mxfbf,Uid:01295943-bd91-493a-b86a-7f82f8f61b26,Namespace:calico-system,Attempt:0,} returns sandbox id \"b83067e66ad8384b43782012314b94cfc1f11e867b78ecd09a98e956e6390061\"" Oct 27 08:31:07.301666 systemd-networkd[1580]: cali58d8bc3121a: Link UP Oct 27 08:31:07.301788 systemd-networkd[1580]: cali58d8bc3121a: Gained carrier Oct 27 08:31:07.310039 containerd[1685]: 2025-10-27 08:31:06.935 [INFO][4679] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--bnvzn-eth0 coredns-674b8bbfcf- kube-system e75ee41c-b373-44a8-9425-dc19e6224499 851 0 2025-10-27 08:30:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-bnvzn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali58d8bc3121a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6" Namespace="kube-system" Pod="coredns-674b8bbfcf-bnvzn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bnvzn-" Oct 27 08:31:07.310039 containerd[1685]: 2025-10-27 08:31:06.935 [INFO][4679] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6" Namespace="kube-system" Pod="coredns-674b8bbfcf-bnvzn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bnvzn-eth0" Oct 27 08:31:07.310039 containerd[1685]: 2025-10-27 08:31:06.987 [INFO][4710] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6" HandleID="k8s-pod-network.3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6" Workload="localhost-k8s-coredns--674b8bbfcf--bnvzn-eth0" Oct 27 08:31:07.310039 containerd[1685]: 2025-10-27 08:31:06.988 [INFO][4710] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6" HandleID="k8s-pod-network.3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6" Workload="localhost-k8s-coredns--674b8bbfcf--bnvzn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb5b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-bnvzn", "timestamp":"2025-10-27 08:31:06.98770795 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:31:07.310039 containerd[1685]: 2025-10-27 08:31:06.988 [INFO][4710] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:31:07.310039 containerd[1685]: 2025-10-27 08:31:07.176 [INFO][4710] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:31:07.310039 containerd[1685]: 2025-10-27 08:31:07.176 [INFO][4710] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:31:07.310039 containerd[1685]: 2025-10-27 08:31:07.258 [INFO][4710] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6" host="localhost" Oct 27 08:31:07.310039 containerd[1685]: 2025-10-27 08:31:07.262 [INFO][4710] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:31:07.310039 containerd[1685]: 2025-10-27 08:31:07.275 [INFO][4710] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:31:07.310039 containerd[1685]: 2025-10-27 08:31:07.276 [INFO][4710] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:31:07.310039 containerd[1685]: 2025-10-27 08:31:07.277 [INFO][4710] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:31:07.310039 containerd[1685]: 2025-10-27 08:31:07.277 [INFO][4710] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6" host="localhost" Oct 27 08:31:07.310039 containerd[1685]: 2025-10-27 08:31:07.277 [INFO][4710] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6 Oct 27 08:31:07.310039 containerd[1685]: 2025-10-27 08:31:07.281 [INFO][4710] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6" host="localhost" Oct 27 08:31:07.310039 containerd[1685]: 2025-10-27 08:31:07.294 [INFO][4710] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6" host="localhost" Oct 27 08:31:07.310039 containerd[1685]: 2025-10-27 08:31:07.294 [INFO][4710] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6" host="localhost" Oct 27 08:31:07.310039 containerd[1685]: 2025-10-27 08:31:07.294 [INFO][4710] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:31:07.310039 containerd[1685]: 2025-10-27 08:31:07.294 [INFO][4710] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6" HandleID="k8s-pod-network.3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6" Workload="localhost-k8s-coredns--674b8bbfcf--bnvzn-eth0" Oct 27 08:31:07.311413 containerd[1685]: 2025-10-27 08:31:07.296 [INFO][4679] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6" Namespace="kube-system" Pod="coredns-674b8bbfcf-bnvzn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bnvzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bnvzn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e75ee41c-b373-44a8-9425-dc19e6224499", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-bnvzn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali58d8bc3121a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:31:07.311413 containerd[1685]: 2025-10-27 08:31:07.296 [INFO][4679] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6" Namespace="kube-system" Pod="coredns-674b8bbfcf-bnvzn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bnvzn-eth0" Oct 27 08:31:07.311413 containerd[1685]: 2025-10-27 08:31:07.296 [INFO][4679] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58d8bc3121a ContainerID="3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6" Namespace="kube-system" Pod="coredns-674b8bbfcf-bnvzn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bnvzn-eth0" Oct 27 08:31:07.311413 containerd[1685]: 2025-10-27 08:31:07.301 [INFO][4679] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6" Namespace="kube-system" Pod="coredns-674b8bbfcf-bnvzn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bnvzn-eth0" Oct 27 08:31:07.311413 
containerd[1685]: 2025-10-27 08:31:07.302 [INFO][4679] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6" Namespace="kube-system" Pod="coredns-674b8bbfcf-bnvzn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bnvzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bnvzn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e75ee41c-b373-44a8-9425-dc19e6224499", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6", Pod:"coredns-674b8bbfcf-bnvzn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali58d8bc3121a", MAC:"1e:1d:13:38:24:01", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:31:07.311413 containerd[1685]: 2025-10-27 08:31:07.307 [INFO][4679] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6" Namespace="kube-system" Pod="coredns-674b8bbfcf-bnvzn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bnvzn-eth0" Oct 27 08:31:07.319663 containerd[1685]: time="2025-10-27T08:31:07.319642153Z" level=info msg="connecting to shim 3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6" address="unix:///run/containerd/s/69334e506ab68a50239fcdebd9edc0c38eeab692855ad89cb53fd704f9ae891b" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:31:07.333015 systemd[1]: Started cri-containerd-3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6.scope - libcontainer container 3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6. 
Oct 27 08:31:07.343710 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:31:07.373262 containerd[1685]: time="2025-10-27T08:31:07.373101625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bnvzn,Uid:e75ee41c-b373-44a8-9425-dc19e6224499,Namespace:kube-system,Attempt:0,} returns sandbox id \"3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6\"" Oct 27 08:31:07.380984 containerd[1685]: time="2025-10-27T08:31:07.380657522Z" level=info msg="CreateContainer within sandbox \"3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 08:31:07.386891 containerd[1685]: time="2025-10-27T08:31:07.386876631Z" level=info msg="Container e5fb4f476c43cc852f756e672731285786c8243059f04cad9829a39957477f54: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:31:07.399934 containerd[1685]: time="2025-10-27T08:31:07.399721145Z" level=info msg="CreateContainer within sandbox \"3638f1aec5d48cbc645d315e7a30af12a5fdcf9224216e68b03c377b5a4b53b6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e5fb4f476c43cc852f756e672731285786c8243059f04cad9829a39957477f54\"" Oct 27 08:31:07.401789 containerd[1685]: time="2025-10-27T08:31:07.401571660Z" level=info msg="StartContainer for \"e5fb4f476c43cc852f756e672731285786c8243059f04cad9829a39957477f54\"" Oct 27 08:31:07.403885 containerd[1685]: time="2025-10-27T08:31:07.403473877Z" level=info msg="connecting to shim e5fb4f476c43cc852f756e672731285786c8243059f04cad9829a39957477f54" address="unix:///run/containerd/s/69334e506ab68a50239fcdebd9edc0c38eeab692855ad89cb53fd704f9ae891b" protocol=ttrpc version=3 Oct 27 08:31:07.420076 systemd[1]: Started cri-containerd-e5fb4f476c43cc852f756e672731285786c8243059f04cad9829a39957477f54.scope - libcontainer container e5fb4f476c43cc852f756e672731285786c8243059f04cad9829a39957477f54. 
Oct 27 08:31:07.456142 containerd[1685]: time="2025-10-27T08:31:07.456121480Z" level=info msg="StartContainer for \"e5fb4f476c43cc852f756e672731285786c8243059f04cad9829a39957477f54\" returns successfully" Oct 27 08:31:07.521895 containerd[1685]: time="2025-10-27T08:31:07.521868912Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:31:07.522300 containerd[1685]: time="2025-10-27T08:31:07.522277530Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:31:07.522396 containerd[1685]: time="2025-10-27T08:31:07.522345565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:31:07.522775 kubelet[2996]: E1027 08:31:07.522649 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:31:07.523005 kubelet[2996]: E1027 08:31:07.522784 2996 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:31:07.523297 containerd[1685]: time="2025-10-27T08:31:07.523054720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 08:31:07.523440 kubelet[2996]: E1027 08:31:07.523123 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6g9tz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74fcb5d84f-hzkgp_calico-apiserver(423be6d6-16fc-456a-a6d1-876213aa577d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:07.524950 kubelet[2996]: E1027 08:31:07.524929 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74fcb5d84f-hzkgp" podUID="423be6d6-16fc-456a-a6d1-876213aa577d" Oct 27 08:31:07.852884 containerd[1685]: time="2025-10-27T08:31:07.852676125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9mcn,Uid:ef789cb0-639f-477f-87e6-d52226d43664,Namespace:calico-system,Attempt:0,}" Oct 27 08:31:07.921046 systemd-networkd[1580]: cali9f09423946b: Link UP Oct 27 08:31:07.921742 systemd-networkd[1580]: cali9f09423946b: Gained carrier Oct 27 08:31:07.931406 containerd[1685]: 2025-10-27 08:31:07.880 [INFO][4984] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--l9mcn-eth0 csi-node-driver- calico-system ef789cb0-639f-477f-87e6-d52226d43664 746 0 2025-10-27 08:30:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-l9mcn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9f09423946b [] [] }} ContainerID="f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f" Namespace="calico-system" Pod="csi-node-driver-l9mcn" WorkloadEndpoint="localhost-k8s-csi--node--driver--l9mcn-" Oct 27 08:31:07.931406 containerd[1685]: 2025-10-27 08:31:07.881 [INFO][4984] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f" Namespace="calico-system" Pod="csi-node-driver-l9mcn" WorkloadEndpoint="localhost-k8s-csi--node--driver--l9mcn-eth0" Oct 27 08:31:07.931406 containerd[1685]: 2025-10-27 08:31:07.895 [INFO][4996] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f" HandleID="k8s-pod-network.f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f" Workload="localhost-k8s-csi--node--driver--l9mcn-eth0" Oct 27 08:31:07.931406 containerd[1685]: 2025-10-27 08:31:07.895 [INFO][4996] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f" HandleID="k8s-pod-network.f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f" Workload="localhost-k8s-csi--node--driver--l9mcn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac930), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-l9mcn", "timestamp":"2025-10-27 08:31:07.895728977 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:31:07.931406 containerd[1685]: 2025-10-27 08:31:07.895 [INFO][4996] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:31:07.931406 containerd[1685]: 2025-10-27 08:31:07.895 [INFO][4996] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:31:07.931406 containerd[1685]: 2025-10-27 08:31:07.895 [INFO][4996] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:31:07.931406 containerd[1685]: 2025-10-27 08:31:07.900 [INFO][4996] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f" host="localhost" Oct 27 08:31:07.931406 containerd[1685]: 2025-10-27 08:31:07.905 [INFO][4996] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:31:07.931406 containerd[1685]: 2025-10-27 08:31:07.907 [INFO][4996] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:31:07.931406 containerd[1685]: 2025-10-27 08:31:07.908 [INFO][4996] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:31:07.931406 containerd[1685]: 2025-10-27 08:31:07.909 [INFO][4996] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:31:07.931406 containerd[1685]: 2025-10-27 08:31:07.909 [INFO][4996] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f" host="localhost" Oct 27 08:31:07.931406 containerd[1685]: 2025-10-27 08:31:07.910 [INFO][4996] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f Oct 27 08:31:07.931406 containerd[1685]: 2025-10-27 08:31:07.912 [INFO][4996] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f" host="localhost" Oct 27 08:31:07.931406 containerd[1685]: 2025-10-27 08:31:07.914 [INFO][4996] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f" host="localhost" Oct 27 08:31:07.931406 containerd[1685]: 2025-10-27 08:31:07.914 [INFO][4996] ipam/ipam.go 878: Auto-assigned 1 out of 1 
IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f" host="localhost" Oct 27 08:31:07.931406 containerd[1685]: 2025-10-27 08:31:07.914 [INFO][4996] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:31:07.931406 containerd[1685]: 2025-10-27 08:31:07.914 [INFO][4996] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f" HandleID="k8s-pod-network.f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f" Workload="localhost-k8s-csi--node--driver--l9mcn-eth0" Oct 27 08:31:07.932732 containerd[1685]: 2025-10-27 08:31:07.916 [INFO][4984] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f" Namespace="calico-system" Pod="csi-node-driver-l9mcn" WorkloadEndpoint="localhost-k8s-csi--node--driver--l9mcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--l9mcn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef789cb0-639f-477f-87e6-d52226d43664", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 30, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-l9mcn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9f09423946b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:31:07.932732 containerd[1685]: 2025-10-27 08:31:07.916 [INFO][4984] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f" Namespace="calico-system" Pod="csi-node-driver-l9mcn" WorkloadEndpoint="localhost-k8s-csi--node--driver--l9mcn-eth0" Oct 27 08:31:07.932732 containerd[1685]: 2025-10-27 08:31:07.916 [INFO][4984] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f09423946b ContainerID="f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f" Namespace="calico-system" Pod="csi-node-driver-l9mcn" WorkloadEndpoint="localhost-k8s-csi--node--driver--l9mcn-eth0" Oct 27 08:31:07.932732 containerd[1685]: 2025-10-27 08:31:07.922 [INFO][4984] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f" Namespace="calico-system" Pod="csi-node-driver-l9mcn" WorkloadEndpoint="localhost-k8s-csi--node--driver--l9mcn-eth0" Oct 27 08:31:07.932732 containerd[1685]: 2025-10-27 
08:31:07.922 [INFO][4984] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f" Namespace="calico-system" Pod="csi-node-driver-l9mcn" WorkloadEndpoint="localhost-k8s-csi--node--driver--l9mcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--l9mcn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef789cb0-639f-477f-87e6-d52226d43664", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 30, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f", Pod:"csi-node-driver-l9mcn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9f09423946b", MAC:"8e:62:41:3e:4d:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:31:07.932732 containerd[1685]: 2025-10-27 08:31:07.928 [INFO][4984] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f" Namespace="calico-system" Pod="csi-node-driver-l9mcn" WorkloadEndpoint="localhost-k8s-csi--node--driver--l9mcn-eth0" Oct 27 08:31:07.946402 containerd[1685]: time="2025-10-27T08:31:07.946375028Z" level=info msg="connecting to shim f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f" address="unix:///run/containerd/s/2ab16b0d2788c6c95007d648e412f107595842298bfa489022a9bd1ab6354eee" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:31:07.969022 systemd[1]: Started cri-containerd-f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f.scope - libcontainer container f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f. 
Oct 27 08:31:07.978636 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:31:07.991749 containerd[1685]: time="2025-10-27T08:31:07.991383579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9mcn,Uid:ef789cb0-639f-477f-87e6-d52226d43664,Namespace:calico-system,Attempt:0,} returns sandbox id \"f564d4001639b7cb5d3e5a4452e228e5671f0ff5d37ff260f8c5e6f374d1741f\"" Oct 27 08:31:08.007973 containerd[1685]: time="2025-10-27T08:31:08.007943549Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:31:08.008225 containerd[1685]: time="2025-10-27T08:31:08.008200776Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 08:31:08.008628 containerd[1685]: time="2025-10-27T08:31:08.008247824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 08:31:08.008664 kubelet[2996]: E1027 08:31:08.008308 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:31:08.008664 kubelet[2996]: E1027 08:31:08.008333 2996 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:31:08.008664 kubelet[2996]: E1027 08:31:08.008448 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-thxmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-565cf6bf7-fd9ql_calico-system(024044b8-ffed-437e-be0d-2af9d6b61984): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:08.009371 containerd[1685]: time="2025-10-27T08:31:08.009354285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 08:31:08.010488 kubelet[2996]: E1027 08:31:08.010459 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-565cf6bf7-fd9ql" 
podUID="024044b8-ffed-437e-be0d-2af9d6b61984" Oct 27 08:31:08.073746 kubelet[2996]: E1027 08:31:08.073618 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-565cf6bf7-fd9ql" podUID="024044b8-ffed-437e-be0d-2af9d6b61984" Oct 27 08:31:08.083980 kubelet[2996]: E1027 08:31:08.083850 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74fcb5d84f-hzkgp" podUID="423be6d6-16fc-456a-a6d1-876213aa577d" Oct 27 08:31:08.108622 kubelet[2996]: I1027 08:31:08.108081 2996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bnvzn" podStartSLOduration=39.108069409 podStartE2EDuration="39.108069409s" podCreationTimestamp="2025-10-27 08:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:31:08.098548504 +0000 UTC m=+46.411564395" watchObservedRunningTime="2025-10-27 08:31:08.108069409 +0000 UTC m=+46.421085300" Oct 27 08:31:08.151052 systemd-networkd[1580]: vxlan.calico: Gained IPv6LL Oct 27 08:31:08.355417 containerd[1685]: time="2025-10-27T08:31:08.355288131Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:31:08.355775 containerd[1685]: time="2025-10-27T08:31:08.355702720Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 08:31:08.355848 containerd[1685]: time="2025-10-27T08:31:08.355770867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 08:31:08.355942 kubelet[2996]: E1027 08:31:08.355889 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:31:08.356012 kubelet[2996]: E1027 08:31:08.355939 2996 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 
08:31:08.356295 containerd[1685]: time="2025-10-27T08:31:08.356201717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 08:31:08.359611 kubelet[2996]: E1027 08:31:08.359224 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4b62w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mxfbf_calico-system(01295943-bd91-493a-b86a-7f82f8f61b26): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:08.360394 kubelet[2996]: E1027 08:31:08.360372 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mxfbf" podUID="01295943-bd91-493a-b86a-7f82f8f61b26" Oct 27 08:31:08.470019 systemd-networkd[1580]: cali9debba472cf: Gained IPv6LL Oct 27 08:31:08.534259 systemd-networkd[1580]: cali58d8bc3121a: Gained IPv6LL Oct 27 08:31:08.662024 systemd-networkd[1580]: calie58b61c6dbb: Gained IPv6LL Oct 27 08:31:08.690109 containerd[1685]: time="2025-10-27T08:31:08.690070082Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:31:08.697294 containerd[1685]: time="2025-10-27T08:31:08.697263833Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 08:31:08.697339 containerd[1685]: time="2025-10-27T08:31:08.697328788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 08:31:08.697437 kubelet[2996]: E1027 08:31:08.697410 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:31:08.697714 kubelet[2996]: E1027 08:31:08.697444 2996 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:31:08.697714 kubelet[2996]: E1027 08:31:08.697534 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9dndt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l9mcn_calico-system(ef789cb0-639f-477f-87e6-d52226d43664): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:08.708965 containerd[1685]: time="2025-10-27T08:31:08.708492275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 08:31:08.982082 systemd-networkd[1580]: cali92af3fbbbc9: Gained IPv6LL Oct 27 08:31:09.086478 kubelet[2996]: E1027 08:31:09.086430 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74fcb5d84f-hzkgp" podUID="423be6d6-16fc-456a-a6d1-876213aa577d" Oct 27 08:31:09.086706 kubelet[2996]: E1027 08:31:09.086659 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mxfbf" podUID="01295943-bd91-493a-b86a-7f82f8f61b26" Oct 27 
08:31:09.086971 kubelet[2996]: E1027 08:31:09.086953 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-565cf6bf7-fd9ql" podUID="024044b8-ffed-437e-be0d-2af9d6b61984" Oct 27 08:31:09.211239 containerd[1685]: time="2025-10-27T08:31:09.211203894Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:31:09.211621 containerd[1685]: time="2025-10-27T08:31:09.211573090Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 08:31:09.211621 containerd[1685]: time="2025-10-27T08:31:09.211603580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 27 08:31:09.211762 kubelet[2996]: E1027 08:31:09.211710 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:31:09.211802 kubelet[2996]: E1027 08:31:09.211769 2996 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:31:09.212132 kubelet[2996]: E1027 08:31:09.211862 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9dndt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l9mcn_calico-system(ef789cb0-639f-477f-87e6-d52226d43664): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:09.213098 kubelet[2996]: E1027 08:31:09.213076 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l9mcn" podUID="ef789cb0-639f-477f-87e6-d52226d43664" Oct 27 08:31:09.622040 systemd-networkd[1580]: cali9f09423946b: Gained IPv6LL Oct 27 08:31:10.089017 kubelet[2996]: E1027 08:31:10.088573 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l9mcn" podUID="ef789cb0-639f-477f-87e6-d52226d43664" Oct 27 08:31:15.855997 containerd[1685]: time="2025-10-27T08:31:15.855942587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 08:31:16.262948 containerd[1685]: time="2025-10-27T08:31:16.262729432Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:31:16.263203 containerd[1685]: time="2025-10-27T08:31:16.263182224Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 08:31:16.263274 containerd[1685]: time="2025-10-27T08:31:16.263255499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 08:31:16.263423 kubelet[2996]: E1027 08:31:16.263399 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:31:16.263919 kubelet[2996]: E1027 08:31:16.263846 2996 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:31:16.264998 kubelet[2996]: E1027 08:31:16.264968 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:857366382d06447f833489122285d3f7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-854sq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86b9c948c9-rsbhz_calico-system(275ad17e-d663-404d-99d3-03d139a8cc73): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:16.267246 containerd[1685]: time="2025-10-27T08:31:16.267227135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 08:31:16.761184 containerd[1685]: time="2025-10-27T08:31:16.761147124Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:31:16.761639 containerd[1685]: time="2025-10-27T08:31:16.761605587Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 08:31:16.761697 containerd[1685]: time="2025-10-27T08:31:16.761674663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 08:31:16.761837 kubelet[2996]: E1027 08:31:16.761806 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:31:16.761880 kubelet[2996]: E1027 08:31:16.761847 2996 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:31:16.762312 kubelet[2996]: E1027 08:31:16.762050 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-854sq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86b9c948c9-rsbhz_calico-system(275ad17e-d663-404d-99d3-03d139a8cc73): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:16.763216 kubelet[2996]: E1027 08:31:16.763186 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b9c948c9-rsbhz" podUID="275ad17e-d663-404d-99d3-03d139a8cc73" Oct 27 08:31:18.853919 containerd[1685]: time="2025-10-27T08:31:18.853725642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:31:19.201848 containerd[1685]: time="2025-10-27T08:31:19.201812595Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 
27 08:31:19.202265 containerd[1685]: time="2025-10-27T08:31:19.202224813Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:31:19.202265 containerd[1685]: time="2025-10-27T08:31:19.202245864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:31:19.202398 kubelet[2996]: E1027 08:31:19.202325 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:31:19.202398 kubelet[2996]: E1027 08:31:19.202355 2996 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:31:19.202627 kubelet[2996]: E1027 08:31:19.202444 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j45rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod calico-apiserver-74fcb5d84f-x85t2_calico-apiserver(f209a925-c7ce-4786-8b27-b2615413f1ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:19.203900 kubelet[2996]: E1027 08:31:19.203839 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74fcb5d84f-x85t2" podUID="f209a925-c7ce-4786-8b27-b2615413f1ab" Oct 27 08:31:20.853497 containerd[1685]: time="2025-10-27T08:31:20.853463713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 08:31:21.205964 containerd[1685]: time="2025-10-27T08:31:21.205826561Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:31:21.206266 containerd[1685]: time="2025-10-27T08:31:21.206245747Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 08:31:21.206316 containerd[1685]: time="2025-10-27T08:31:21.206287813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 08:31:21.206403 kubelet[2996]: E1027 08:31:21.206372 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:31:21.206403 kubelet[2996]: E1027 08:31:21.206406 2996 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:31:21.206681 kubelet[2996]: E1027 08:31:21.206495 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9dndt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l9mcn_calico-system(ef789cb0-639f-477f-87e6-d52226d43664): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:21.209517 containerd[1685]: time="2025-10-27T08:31:21.209208203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 08:31:21.576150 containerd[1685]: time="2025-10-27T08:31:21.576072878Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:31:21.576457 containerd[1685]: time="2025-10-27T08:31:21.576428748Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 08:31:21.576537 containerd[1685]: time="2025-10-27T08:31:21.576518294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 27 08:31:21.576855 kubelet[2996]: E1027 08:31:21.576683 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:31:21.576855 kubelet[2996]: E1027 08:31:21.576725 2996 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:31:21.576855 kubelet[2996]: E1027 08:31:21.576815 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9dndt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l9mcn_calico-system(ef789cb0-639f-477f-87e6-d52226d43664): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:21.578631 kubelet[2996]: E1027 08:31:21.578600 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-l9mcn" podUID="ef789cb0-639f-477f-87e6-d52226d43664" Oct 27 08:31:22.852731 containerd[1685]: time="2025-10-27T08:31:22.852663390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:31:23.964976 containerd[1685]: time="2025-10-27T08:31:23.964942831Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:31:23.965288 containerd[1685]: time="2025-10-27T08:31:23.965258796Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:31:23.965330 containerd[1685]: time="2025-10-27T08:31:23.965319673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:31:23.965438 kubelet[2996]: E1027 08:31:23.965410 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:31:23.966128 kubelet[2996]: E1027 08:31:23.965454 2996 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:31:23.966128 kubelet[2996]: E1027 08:31:23.965650 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6g9tz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74fcb5d84f-hzkgp_calico-apiserver(423be6d6-16fc-456a-a6d1-876213aa577d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:23.966249 containerd[1685]: time="2025-10-27T08:31:23.965655528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 08:31:23.967596 kubelet[2996]: E1027 08:31:23.967563 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74fcb5d84f-hzkgp" podUID="423be6d6-16fc-456a-a6d1-876213aa577d" Oct 27 08:31:24.299573 containerd[1685]: time="2025-10-27T08:31:24.299467323Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:31:24.299968 containerd[1685]: time="2025-10-27T08:31:24.299923499Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 08:31:24.299968 containerd[1685]: time="2025-10-27T08:31:24.299947495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 08:31:24.300156 kubelet[2996]: E1027 08:31:24.300104 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:31:24.300156 kubelet[2996]: E1027 08:31:24.300141 2996 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:31:24.300348 kubelet[2996]: E1027 08:31:24.300270 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-thxmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-565cf6bf7-fd9ql_calico-system(024044b8-ffed-437e-be0d-2af9d6b61984): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:24.301444 containerd[1685]: time="2025-10-27T08:31:24.301361749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 08:31:24.301531 kubelet[2996]: E1027 08:31:24.301415 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-565cf6bf7-fd9ql" podUID="024044b8-ffed-437e-be0d-2af9d6b61984" Oct 27 08:31:24.629801 containerd[1685]: time="2025-10-27T08:31:24.629713594Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:31:24.630359 containerd[1685]: time="2025-10-27T08:31:24.630281921Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 08:31:24.630359 containerd[1685]: time="2025-10-27T08:31:24.630338704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 08:31:24.630562 kubelet[2996]: E1027 08:31:24.630518 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:31:24.630613 kubelet[2996]: E1027 08:31:24.630568 2996 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:31:24.630928 kubelet[2996]: E1027 08:31:24.630699 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4b62w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mxfbf_calico-system(01295943-bd91-493a-b86a-7f82f8f61b26): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:24.632142 kubelet[2996]: E1027 08:31:24.632124 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mxfbf" podUID="01295943-bd91-493a-b86a-7f82f8f61b26" Oct 27 08:31:27.854082 kubelet[2996]: E1027 08:31:27.853799 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b9c948c9-rsbhz" podUID="275ad17e-d663-404d-99d3-03d139a8cc73" Oct 27 08:31:29.853931 kubelet[2996]: E1027 08:31:29.853831 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found\"" pod="calico-apiserver/calico-apiserver-74fcb5d84f-x85t2" podUID="f209a925-c7ce-4786-8b27-b2615413f1ab" Oct 27 08:31:31.155695 containerd[1685]: time="2025-10-27T08:31:31.155615290Z" level=info msg="TaskExit event in podsandbox handler container_id:\"10b23570a41acdbf3a7699fed3775bf0e43909572be3bb7b85d4c7f02d7b8544\" id:\"c8bf8a92255d89ba1b84aa8090a1d6563a9abc0fa84fafab5da35d88d6939ea5\" pid:5107 exited_at:{seconds:1761553891 nanos:155421072}" Oct 27 08:31:32.596218 systemd[1]: Started sshd@7-139.178.70.104:22-147.75.109.163:48574.service - OpenSSH per-connection server daemon (147.75.109.163:48574). Oct 27 08:31:32.685749 sshd[5123]: Accepted publickey for core from 147.75.109.163 port 48574 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:31:32.687417 sshd-session[5123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:31:32.691533 systemd-logind[1652]: New session 10 of user core. Oct 27 08:31:32.701092 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 27 08:31:32.854738 kubelet[2996]: E1027 08:31:32.854584 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l9mcn" podUID="ef789cb0-639f-477f-87e6-d52226d43664" Oct 27 08:31:33.196778 sshd[5126]: Connection closed by 147.75.109.163 port 48574 Oct 27 08:31:33.197345 sshd-session[5123]: pam_unix(sshd:session): session closed for user core Oct 27 08:31:33.200820 systemd[1]: sshd@7-139.178.70.104:22-147.75.109.163:48574.service: Deactivated successfully. Oct 27 08:31:33.201937 systemd[1]: session-10.scope: Deactivated successfully. Oct 27 08:31:33.202397 systemd-logind[1652]: Session 10 logged out. Waiting for processes to exit. Oct 27 08:31:33.203051 systemd-logind[1652]: Removed session 10. 
Oct 27 08:31:36.853212 kubelet[2996]: E1027 08:31:36.853171 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74fcb5d84f-hzkgp" podUID="423be6d6-16fc-456a-a6d1-876213aa577d" Oct 27 08:31:37.853077 kubelet[2996]: E1027 08:31:37.852663 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mxfbf" podUID="01295943-bd91-493a-b86a-7f82f8f61b26" Oct 27 08:31:38.215639 systemd[1]: Started sshd@8-139.178.70.104:22-147.75.109.163:48584.service - OpenSSH per-connection server daemon (147.75.109.163:48584). Oct 27 08:31:38.348715 sshd[5144]: Accepted publickey for core from 147.75.109.163 port 48584 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:31:38.349653 sshd-session[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:31:38.354172 systemd-logind[1652]: New session 11 of user core. Oct 27 08:31:38.358147 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 27 08:31:38.458733 sshd[5147]: Connection closed by 147.75.109.163 port 48584 Oct 27 08:31:38.459107 sshd-session[5144]: pam_unix(sshd:session): session closed for user core Oct 27 08:31:38.461247 systemd[1]: sshd@8-139.178.70.104:22-147.75.109.163:48584.service: Deactivated successfully. Oct 27 08:31:38.462401 systemd[1]: session-11.scope: Deactivated successfully. Oct 27 08:31:38.462970 systemd-logind[1652]: Session 11 logged out. Waiting for processes to exit. Oct 27 08:31:38.463707 systemd-logind[1652]: Removed session 11. 
Oct 27 08:31:38.852811 kubelet[2996]: E1027 08:31:38.852751 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-565cf6bf7-fd9ql" podUID="024044b8-ffed-437e-be0d-2af9d6b61984" Oct 27 08:31:40.853616 containerd[1685]: time="2025-10-27T08:31:40.853587317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 08:31:41.390677 containerd[1685]: time="2025-10-27T08:31:41.390644047Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:31:41.391215 containerd[1685]: time="2025-10-27T08:31:41.391149921Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 08:31:41.391215 containerd[1685]: time="2025-10-27T08:31:41.391196310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 08:31:41.391481 kubelet[2996]: E1027 08:31:41.391451 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:31:41.391707 kubelet[2996]: E1027 08:31:41.391489 2996 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:31:41.391707 kubelet[2996]: E1027 08:31:41.391567 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:857366382d06447f833489122285d3f7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-854sq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86b9c948c9-rsbhz_calico-system(275ad17e-d663-404d-99d3-03d139a8cc73): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:41.393913 containerd[1685]: time="2025-10-27T08:31:41.393271760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 08:31:41.718076 containerd[1685]: time="2025-10-27T08:31:41.717997849Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:31:41.718326 containerd[1685]: time="2025-10-27T08:31:41.718305589Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 08:31:41.718372 containerd[1685]: time="2025-10-27T08:31:41.718360710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 08:31:41.719962 kubelet[2996]: E1027 08:31:41.718590 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:31:41.719962 kubelet[2996]: E1027 08:31:41.718623 2996 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:31:41.719962 kubelet[2996]: E1027 08:31:41.718704 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-854sq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86b9c948c9-rsbhz_calico-system(275ad17e-d663-404d-99d3-03d139a8cc73): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:41.719962 kubelet[2996]: E1027 08:31:41.719938 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b9c948c9-rsbhz" podUID="275ad17e-d663-404d-99d3-03d139a8cc73" Oct 27 08:31:41.855000 containerd[1685]: time="2025-10-27T08:31:41.854971949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:31:42.633213 containerd[1685]: time="2025-10-27T08:31:42.632987498Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 
27 08:31:42.633556 containerd[1685]: time="2025-10-27T08:31:42.633530975Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:31:42.633705 containerd[1685]: time="2025-10-27T08:31:42.633573350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:31:42.633814 kubelet[2996]: E1027 08:31:42.633773 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:31:42.634752 kubelet[2996]: E1027 08:31:42.633815 2996 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:31:42.634752 kubelet[2996]: E1027 08:31:42.633900 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j45rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod calico-apiserver-74fcb5d84f-x85t2_calico-apiserver(f209a925-c7ce-4786-8b27-b2615413f1ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:42.635047 kubelet[2996]: E1027 08:31:42.635023 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74fcb5d84f-x85t2" podUID="f209a925-c7ce-4786-8b27-b2615413f1ab" Oct 27 08:31:43.469686 systemd[1]: Started sshd@9-139.178.70.104:22-147.75.109.163:57150.service - OpenSSH per-connection server daemon (147.75.109.163:57150). Oct 27 08:31:43.688844 sshd[5160]: Accepted publickey for core from 147.75.109.163 port 57150 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:31:43.689938 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:31:43.693405 systemd-logind[1652]: New session 12 of user core. Oct 27 08:31:43.700020 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 27 08:31:43.829799 sshd[5165]: Connection closed by 147.75.109.163 port 57150 Oct 27 08:31:43.830823 sshd-session[5160]: pam_unix(sshd:session): session closed for user core Oct 27 08:31:43.836510 systemd[1]: sshd@9-139.178.70.104:22-147.75.109.163:57150.service: Deactivated successfully. Oct 27 08:31:43.837636 systemd[1]: session-12.scope: Deactivated successfully. Oct 27 08:31:43.838386 systemd-logind[1652]: Session 12 logged out. Waiting for processes to exit. Oct 27 08:31:43.840090 systemd-logind[1652]: Removed session 12. Oct 27 08:31:43.841316 systemd[1]: Started sshd@10-139.178.70.104:22-147.75.109.163:57158.service - OpenSSH per-connection server daemon (147.75.109.163:57158). Oct 27 08:31:43.880341 sshd[5178]: Accepted publickey for core from 147.75.109.163 port 57158 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:31:43.880609 sshd-session[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:31:43.883715 systemd-logind[1652]: New session 13 of user core. Oct 27 08:31:43.886988 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 27 08:31:44.048044 sshd[5181]: Connection closed by 147.75.109.163 port 57158 Oct 27 08:31:44.048239 sshd-session[5178]: pam_unix(sshd:session): session closed for user core Oct 27 08:31:44.055494 systemd[1]: sshd@10-139.178.70.104:22-147.75.109.163:57158.service: Deactivated successfully. Oct 27 08:31:44.059258 systemd[1]: session-13.scope: Deactivated successfully. Oct 27 08:31:44.061184 systemd-logind[1652]: Session 13 logged out. Waiting for processes to exit. Oct 27 08:31:44.066013 systemd[1]: Started sshd@11-139.178.70.104:22-147.75.109.163:57166.service - OpenSSH per-connection server daemon (147.75.109.163:57166). Oct 27 08:31:44.067081 systemd-logind[1652]: Removed session 13. 
Oct 27 08:31:44.124990 sshd[5191]: Accepted publickey for core from 147.75.109.163 port 57166 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:31:44.126401 sshd-session[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:31:44.130216 systemd-logind[1652]: New session 14 of user core. Oct 27 08:31:44.135094 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 27 08:31:44.243622 sshd[5194]: Connection closed by 147.75.109.163 port 57166 Oct 27 08:31:44.243556 sshd-session[5191]: pam_unix(sshd:session): session closed for user core Oct 27 08:31:44.247014 systemd[1]: sshd@11-139.178.70.104:22-147.75.109.163:57166.service: Deactivated successfully. Oct 27 08:31:44.248062 systemd[1]: session-14.scope: Deactivated successfully. Oct 27 08:31:44.248595 systemd-logind[1652]: Session 14 logged out. Waiting for processes to exit. Oct 27 08:31:44.249201 systemd-logind[1652]: Removed session 14. Oct 27 08:31:46.853480 containerd[1685]: time="2025-10-27T08:31:46.853442873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 08:31:47.373191 containerd[1685]: time="2025-10-27T08:31:47.373008736Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:31:47.373375 containerd[1685]: time="2025-10-27T08:31:47.373349276Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 08:31:47.373424 containerd[1685]: time="2025-10-27T08:31:47.373413903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 08:31:47.373701 kubelet[2996]: E1027 08:31:47.373511 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:31:47.373701 kubelet[2996]: E1027 08:31:47.373552 2996 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:31:47.373701 kubelet[2996]: E1027 08:31:47.373635 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9dndt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l9mcn_calico-system(ef789cb0-639f-477f-87e6-d52226d43664): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:47.376625 containerd[1685]: time="2025-10-27T08:31:47.376561773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 08:31:47.727199 containerd[1685]: time="2025-10-27T08:31:47.727167654Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:31:47.733012 containerd[1685]: time="2025-10-27T08:31:47.732990170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 08:31:47.733060 containerd[1685]: time="2025-10-27T08:31:47.733045393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 27 08:31:47.733289 kubelet[2996]: E1027 08:31:47.733138 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:31:47.733289 kubelet[2996]: E1027 08:31:47.733176 2996 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:31:47.733289 kubelet[2996]: E1027 08:31:47.733256 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9dndt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l9mcn_calico-system(ef789cb0-639f-477f-87e6-d52226d43664): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:47.735012 kubelet[2996]: E1027 08:31:47.734995 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-l9mcn" podUID="ef789cb0-639f-477f-87e6-d52226d43664" Oct 27 08:31:48.853072 containerd[1685]: time="2025-10-27T08:31:48.853046944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:31:49.202593 containerd[1685]: time="2025-10-27T08:31:49.202562025Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:31:49.202992 containerd[1685]: time="2025-10-27T08:31:49.202960338Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:31:49.203076 containerd[1685]: time="2025-10-27T08:31:49.203011531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:31:49.203110 kubelet[2996]: E1027 08:31:49.203075 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:31:49.203110 kubelet[2996]: E1027 08:31:49.203101 2996 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:31:49.204538 kubelet[2996]: E1027 08:31:49.203194 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6g9tz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74fcb5d84f-hzkgp_calico-apiserver(423be6d6-16fc-456a-a6d1-876213aa577d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:49.204538 kubelet[2996]: E1027 08:31:49.204442 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74fcb5d84f-hzkgp" podUID="423be6d6-16fc-456a-a6d1-876213aa577d" Oct 27 08:31:49.252619 systemd[1]: Started sshd@12-139.178.70.104:22-147.75.109.163:57168.service - OpenSSH per-connection server daemon (147.75.109.163:57168). Oct 27 08:31:49.298720 sshd[5221]: Accepted publickey for core from 147.75.109.163 port 57168 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:31:49.299563 sshd-session[5221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:31:49.302784 systemd-logind[1652]: New session 15 of user core. Oct 27 08:31:49.311018 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 27 08:31:49.406932 sshd[5224]: Connection closed by 147.75.109.163 port 57168 Oct 27 08:31:49.407262 sshd-session[5221]: pam_unix(sshd:session): session closed for user core Oct 27 08:31:49.409367 systemd-logind[1652]: Session 15 logged out. Waiting for processes to exit. Oct 27 08:31:49.409428 systemd[1]: sshd@12-139.178.70.104:22-147.75.109.163:57168.service: Deactivated successfully. Oct 27 08:31:49.410518 systemd[1]: session-15.scope: Deactivated successfully. Oct 27 08:31:49.411803 systemd-logind[1652]: Removed session 15. 
Oct 27 08:31:49.855806 containerd[1685]: time="2025-10-27T08:31:49.855779671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 08:31:50.223604 containerd[1685]: time="2025-10-27T08:31:50.223578323Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:31:50.305417 containerd[1685]: time="2025-10-27T08:31:50.305380822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 08:31:50.306220 containerd[1685]: time="2025-10-27T08:31:50.306186932Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 08:31:50.306603 kubelet[2996]: E1027 08:31:50.306375 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:31:50.306603 kubelet[2996]: E1027 08:31:50.306429 2996 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:31:50.306603 kubelet[2996]: E1027 08:31:50.306538 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4b62w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mxfbf_calico-system(01295943-bd91-493a-b86a-7f82f8f61b26): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:50.308629 kubelet[2996]: E1027 08:31:50.308591 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mxfbf" podUID="01295943-bd91-493a-b86a-7f82f8f61b26" Oct 27 08:31:51.853952 containerd[1685]: time="2025-10-27T08:31:51.853596531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 08:31:52.323052 containerd[1685]: time="2025-10-27T08:31:52.322954266Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:31:52.323386 containerd[1685]: time="2025-10-27T08:31:52.323357430Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 08:31:52.323460 containerd[1685]: time="2025-10-27T08:31:52.323370257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 08:31:52.323561 kubelet[2996]: E1027 08:31:52.323507 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:31:52.323764 kubelet[2996]: E1027 08:31:52.323565 2996 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:31:52.323764 kubelet[2996]: E1027 08:31:52.323667 2996 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-thxmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-565cf6bf7-fd9ql_calico-system(024044b8-ffed-437e-be0d-2af9d6b61984): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 08:31:52.324984 kubelet[2996]: E1027 08:31:52.324952 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-565cf6bf7-fd9ql" podUID="024044b8-ffed-437e-be0d-2af9d6b61984" Oct 27 08:31:53.854530 kubelet[2996]: E1027 08:31:53.854496 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b9c948c9-rsbhz" podUID="275ad17e-d663-404d-99d3-03d139a8cc73" Oct 27 08:31:54.421592 systemd[1]: Started sshd@13-139.178.70.104:22-147.75.109.163:55936.service - OpenSSH per-connection server daemon (147.75.109.163:55936). Oct 27 08:31:54.472727 sshd[5237]: Accepted publickey for core from 147.75.109.163 port 55936 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:31:54.473399 sshd-session[5237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:31:54.475846 systemd-logind[1652]: New session 16 of user core. Oct 27 08:31:54.484251 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 27 08:31:54.576496 sshd[5240]: Connection closed by 147.75.109.163 port 55936 Oct 27 08:31:54.576441 sshd-session[5237]: pam_unix(sshd:session): session closed for user core Oct 27 08:31:54.579389 systemd[1]: sshd@13-139.178.70.104:22-147.75.109.163:55936.service: Deactivated successfully. Oct 27 08:31:54.580769 systemd[1]: session-16.scope: Deactivated successfully. Oct 27 08:31:54.581554 systemd-logind[1652]: Session 16 logged out. Waiting for processes to exit. Oct 27 08:31:54.582625 systemd-logind[1652]: Removed session 16. Oct 27 08:31:57.853972 kubelet[2996]: E1027 08:31:57.853861 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74fcb5d84f-x85t2" podUID="f209a925-c7ce-4786-8b27-b2615413f1ab" Oct 27 08:31:59.587521 systemd[1]: Started sshd@14-139.178.70.104:22-147.75.109.163:55948.service - OpenSSH per-connection server daemon (147.75.109.163:55948). 
Oct 27 08:31:59.630792 sshd[5254]: Accepted publickey for core from 147.75.109.163 port 55948 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:31:59.631581 sshd-session[5254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:31:59.634233 systemd-logind[1652]: New session 17 of user core. Oct 27 08:31:59.644021 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 27 08:31:59.734014 sshd[5259]: Connection closed by 147.75.109.163 port 55948 Oct 27 08:31:59.734341 sshd-session[5254]: pam_unix(sshd:session): session closed for user core Oct 27 08:31:59.736634 systemd[1]: sshd@14-139.178.70.104:22-147.75.109.163:55948.service: Deactivated successfully. Oct 27 08:31:59.737677 systemd[1]: session-17.scope: Deactivated successfully. Oct 27 08:31:59.738679 systemd-logind[1652]: Session 17 logged out. Waiting for processes to exit. Oct 27 08:31:59.739302 systemd-logind[1652]: Removed session 17. Oct 27 08:32:01.108892 containerd[1685]: time="2025-10-27T08:32:01.108830191Z" level=info msg="TaskExit event in podsandbox handler container_id:\"10b23570a41acdbf3a7699fed3775bf0e43909572be3bb7b85d4c7f02d7b8544\" id:\"3e74038306877cb8ffe9afdef1ef6e070b3099337594c893a37b845f73def585\" pid:5281 exited_at:{seconds:1761553921 nanos:108585647}" Oct 27 08:32:01.855273 kubelet[2996]: E1027 08:32:01.855236 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74fcb5d84f-hzkgp" podUID="423be6d6-16fc-456a-a6d1-876213aa577d" Oct 27 08:32:02.856246 kubelet[2996]: E1027 08:32:02.856203 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l9mcn" podUID="ef789cb0-639f-477f-87e6-d52226d43664" Oct 27 08:32:04.749546 systemd[1]: Started sshd@15-139.178.70.104:22-147.75.109.163:59702.service - OpenSSH per-connection server daemon (147.75.109.163:59702). Oct 27 08:32:04.810642 sshd[5295]: Accepted publickey for core from 147.75.109.163 port 59702 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:32:04.811513 sshd-session[5295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:04.815308 systemd-logind[1652]: New session 18 of user core. 
Oct 27 08:32:04.821018 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 27 08:32:04.852850 kubelet[2996]: E1027 08:32:04.852813 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mxfbf" podUID="01295943-bd91-493a-b86a-7f82f8f61b26" Oct 27 08:32:04.853413 kubelet[2996]: E1027 08:32:04.852843 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-565cf6bf7-fd9ql" podUID="024044b8-ffed-437e-be0d-2af9d6b61984" Oct 27 08:32:04.973834 sshd[5298]: Connection closed by 147.75.109.163 port 59702 Oct 27 08:32:04.973427 sshd-session[5295]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:04.984702 systemd[1]: sshd@15-139.178.70.104:22-147.75.109.163:59702.service: Deactivated successfully. Oct 27 08:32:04.986364 systemd[1]: session-18.scope: Deactivated successfully. Oct 27 08:32:04.987236 systemd-logind[1652]: Session 18 logged out. Waiting for processes to exit. Oct 27 08:32:04.990302 systemd[1]: Started sshd@16-139.178.70.104:22-147.75.109.163:59712.service - OpenSSH per-connection server daemon (147.75.109.163:59712). Oct 27 08:32:04.993119 systemd-logind[1652]: Removed session 18. Oct 27 08:32:05.035301 sshd[5310]: Accepted publickey for core from 147.75.109.163 port 59712 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:32:05.036124 sshd-session[5310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:05.040197 systemd-logind[1652]: New session 19 of user core. Oct 27 08:32:05.044018 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 27 08:32:05.414750 sshd[5313]: Connection closed by 147.75.109.163 port 59712 Oct 27 08:32:05.415744 sshd-session[5310]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:05.424203 systemd[1]: sshd@16-139.178.70.104:22-147.75.109.163:59712.service: Deactivated successfully. Oct 27 08:32:05.425199 systemd[1]: session-19.scope: Deactivated successfully. Oct 27 08:32:05.425935 systemd-logind[1652]: Session 19 logged out. Waiting for processes to exit. Oct 27 08:32:05.427207 systemd[1]: Started sshd@17-139.178.70.104:22-147.75.109.163:59722.service - OpenSSH per-connection server daemon (147.75.109.163:59722). Oct 27 08:32:05.429740 systemd-logind[1652]: Removed session 19. 
Oct 27 08:32:05.498103 sshd[5323]: Accepted publickey for core from 147.75.109.163 port 59722 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:32:05.514339 sshd-session[5323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:05.523594 systemd-logind[1652]: New session 20 of user core. Oct 27 08:32:05.527025 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 27 08:32:05.990196 sshd[5326]: Connection closed by 147.75.109.163 port 59722 Oct 27 08:32:05.993413 sshd-session[5323]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:06.003065 systemd[1]: Started sshd@18-139.178.70.104:22-147.75.109.163:59738.service - OpenSSH per-connection server daemon (147.75.109.163:59738). Oct 27 08:32:06.003419 systemd[1]: sshd@17-139.178.70.104:22-147.75.109.163:59722.service: Deactivated successfully. Oct 27 08:32:06.005148 systemd[1]: session-20.scope: Deactivated successfully. Oct 27 08:32:06.010004 systemd-logind[1652]: Session 20 logged out. Waiting for processes to exit. Oct 27 08:32:06.011002 systemd-logind[1652]: Removed session 20. Oct 27 08:32:06.074620 sshd[5342]: Accepted publickey for core from 147.75.109.163 port 59738 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:32:06.075665 sshd-session[5342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:06.079219 systemd-logind[1652]: New session 21 of user core. Oct 27 08:32:06.082026 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 27 08:32:06.346810 sshd[5349]: Connection closed by 147.75.109.163 port 59738 Oct 27 08:32:06.347541 sshd-session[5342]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:06.353173 systemd[1]: sshd@18-139.178.70.104:22-147.75.109.163:59738.service: Deactivated successfully. Oct 27 08:32:06.354484 systemd[1]: session-21.scope: Deactivated successfully. Oct 27 08:32:06.355441 systemd-logind[1652]: Session 21 logged out. Waiting for processes to exit. Oct 27 08:32:06.358317 systemd[1]: Started sshd@19-139.178.70.104:22-147.75.109.163:59750.service - OpenSSH per-connection server daemon (147.75.109.163:59750). Oct 27 08:32:06.362130 systemd-logind[1652]: Removed session 21. Oct 27 08:32:06.405316 sshd[5361]: Accepted publickey for core from 147.75.109.163 port 59750 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:32:06.406201 sshd-session[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:06.408941 systemd-logind[1652]: New session 22 of user core. Oct 27 08:32:06.415141 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 27 08:32:06.515918 sshd[5364]: Connection closed by 147.75.109.163 port 59750 Oct 27 08:32:06.516043 sshd-session[5361]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:06.518394 systemd[1]: sshd@19-139.178.70.104:22-147.75.109.163:59750.service: Deactivated successfully. Oct 27 08:32:06.519822 systemd[1]: session-22.scope: Deactivated successfully. Oct 27 08:32:06.520870 systemd-logind[1652]: Session 22 logged out. Waiting for processes to exit. Oct 27 08:32:06.521736 systemd-logind[1652]: Removed session 22. 
Oct 27 08:32:08.865064 kubelet[2996]: E1027 08:32:08.864766 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b9c948c9-rsbhz" podUID="275ad17e-d663-404d-99d3-03d139a8cc73" Oct 27 08:32:11.523995 systemd[1]: Started sshd@20-139.178.70.104:22-147.75.109.163:40634.service - OpenSSH per-connection server daemon (147.75.109.163:40634). Oct 27 08:32:11.576482 sshd[5380]: Accepted publickey for core from 147.75.109.163 port 40634 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:32:11.577300 sshd-session[5380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:11.580951 systemd-logind[1652]: New session 23 of user core. Oct 27 08:32:11.585080 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 27 08:32:11.682312 sshd[5383]: Connection closed by 147.75.109.163 port 40634 Oct 27 08:32:11.682635 sshd-session[5380]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:11.684812 systemd[1]: sshd@20-139.178.70.104:22-147.75.109.163:40634.service: Deactivated successfully. Oct 27 08:32:11.686058 systemd[1]: session-23.scope: Deactivated successfully. Oct 27 08:32:11.686598 systemd-logind[1652]: Session 23 logged out. Waiting for processes to exit. Oct 27 08:32:11.687437 systemd-logind[1652]: Removed session 23. 
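The kubelet pod_workers entries repeating through this window all report the same class of failure: image references under ghcr.io/flatcar/calico at tag v3.30.4 cannot be resolved, so the affected containers never start. A small helper along the lines of the sketch below (hypothetical, not part of this system; it only assumes a journal excerpt like the one above is available as plain text on stdin) condenses such an excerpt into a per-pod list of unresolvable images. The regexes target the field layout visible in these lines (pod="namespace/name" plus ghcr.io image references inside the err string).

```python
#!/usr/bin/env python3
"""Summarize kubelet "Error syncing pod" entries from a journal excerpt.

Reads journal text on stdin and prints each pod together with the image
references kubelet reported as unresolvable. Illustrative sketch only.
"""
import re
import sys
from collections import defaultdict

# pod="namespace/name" as emitted by kubelet's structured logging
POD_RE = re.compile(r'pod="(?P<pod>[^"]+)"')
# image references such as ghcr.io/flatcar/calico/whisker:v3.30.4
IMAGE_RE = re.compile(r'(?P<image>ghcr\.io/[\w./-]+:[\w.-]+)')

def summarize(text: str) -> dict:
    """Map pod name -> set of image references seen in its sync errors."""
    failures = defaultdict(set)
    for line in text.splitlines():
        if "Error syncing pod" not in line:
            continue
        pod = POD_RE.search(line)
        if not pod:
            continue
        for match in IMAGE_RE.finditer(line):
            failures[pod.group("pod")].add(match.group("image"))
    return failures

if __name__ == "__main__":
    for pod, images in sorted(summarize(sys.stdin.read()).items()):
        print(pod)
        for image in sorted(images):
            print(f"  {image}")
```

On the excerpt above, this would report calico-system/whisker-86b9c948c9-rsbhz against both the whisker and whisker-backend images, and calico-system/csi-node-driver-l9mcn against the csi and node-driver-registrar images.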
Oct 27 08:32:11.854384 kubelet[2996]: E1027 08:32:11.853804 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74fcb5d84f-x85t2" podUID="f209a925-c7ce-4786-8b27-b2615413f1ab" Oct 27 08:32:14.907049 kubelet[2996]: E1027 08:32:14.907003 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l9mcn" podUID="ef789cb0-639f-477f-87e6-d52226d43664" Oct 27 08:32:15.853082 kubelet[2996]: E1027 08:32:15.853050 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74fcb5d84f-hzkgp" podUID="423be6d6-16fc-456a-a6d1-876213aa577d" Oct 27 08:32:16.692236 systemd[1]: Started sshd@21-139.178.70.104:22-147.75.109.163:40644.service - OpenSSH per-connection server daemon (147.75.109.163:40644). Oct 27 08:32:16.739887 sshd[5395]: Accepted publickey for core from 147.75.109.163 port 40644 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:32:16.740716 sshd-session[5395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:16.744374 systemd-logind[1652]: New session 24 of user core. Oct 27 08:32:16.753007 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 27 08:32:16.847098 sshd[5398]: Connection closed by 147.75.109.163 port 40644 Oct 27 08:32:16.847515 sshd-session[5395]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:16.850363 systemd[1]: sshd@21-139.178.70.104:22-147.75.109.163:40644.service: Deactivated successfully. Oct 27 08:32:16.851920 systemd[1]: session-24.scope: Deactivated successfully. Oct 27 08:32:16.853131 systemd-logind[1652]: Session 24 logged out. Waiting for processes to exit. Oct 27 08:32:16.854325 systemd-logind[1652]: Removed session 24. 
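The same failures surface in the Kubernetes API as containers stuck in the Waiting state with reason ErrImagePull or ImagePullBackOff. Where cluster access is available, a sketch like the one below lists the affected containers without grepping the journal; it assumes the official `kubernetes` Python client and a working kubeconfig, neither of which appears in this log, and takes only the namespace names and failure reasons from the entries above.

```python
#!/usr/bin/env python3
"""List containers waiting on image-pull failures in the Calico namespaces.

Assumes `pip install kubernetes` and a reachable cluster; illustrative only.
"""
from kubernetes import client, config

PULL_REASONS = {"ErrImagePull", "ImagePullBackOff"}

def waiting_on_pulls(v1: client.CoreV1Api, namespace: str) -> None:
    """Print every container in the namespace whose last state is a pull failure."""
    for pod in v1.list_namespaced_pod(namespace).items:
        for cs in pod.status.container_statuses or []:
            waiting = cs.state.waiting if cs.state else None
            if waiting and waiting.reason in PULL_REASONS:
                print(f"{namespace}/{pod.metadata.name} "
                      f"container={cs.name} image={cs.image} reason={waiting.reason}")

if __name__ == "__main__":
    config.load_kube_config()  # use config.load_incluster_config() when run in-cluster
    api = client.CoreV1Api()
    for ns in ("calico-system", "calico-apiserver"):
        waiting_on_pulls(api, ns)
```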
Oct 27 08:32:17.856979 kubelet[2996]: E1027 08:32:17.856615 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-565cf6bf7-fd9ql" podUID="024044b8-ffed-437e-be0d-2af9d6b61984" Oct 27 08:32:18.853893 kubelet[2996]: E1027 08:32:18.853336 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mxfbf" podUID="01295943-bd91-493a-b86a-7f82f8f61b26" Oct 27 08:32:21.859078 systemd[1]: Started sshd@22-139.178.70.104:22-147.75.109.163:51994.service - OpenSSH per-connection server daemon (147.75.109.163:51994). Oct 27 08:32:21.906928 sshd[5411]: Accepted publickey for core from 147.75.109.163 port 51994 ssh2: RSA SHA256:SdSJLhtzP1zCvmJPnLHIt4mznFlZ9Z/x6JYKhlZWnDw Oct 27 08:32:21.908413 sshd-session[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:32:21.915108 systemd-logind[1652]: New session 25 of user core. Oct 27 08:32:21.920986 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 27 08:32:22.021662 sshd[5416]: Connection closed by 147.75.109.163 port 51994 Oct 27 08:32:22.022194 sshd-session[5411]: pam_unix(sshd:session): session closed for user core Oct 27 08:32:22.026406 systemd[1]: sshd@22-139.178.70.104:22-147.75.109.163:51994.service: Deactivated successfully. Oct 27 08:32:22.027528 systemd[1]: session-25.scope: Deactivated successfully. Oct 27 08:32:22.029394 systemd-logind[1652]: Session 25 logged out. Waiting for processes to exit. Oct 27 08:32:22.030328 systemd-logind[1652]: Removed session 25.
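Every pull failure in this window resolves to the same root cause reported by containerd: the v3.30.4 tags are not found under ghcr.io/flatcar/calico. Whether a given tag actually exists can be checked against the registry itself. The sketch below is a diagnostic aid only; it assumes GHCR exposes the standard OCI distribution API and issues anonymous pull tokens for public repositories (none of which is shown in the log), and it simply reports the HTTP status of a manifest request for each image reference named above (200 = tag exists, 404 = not found, matching the NotFound errors kubelet logged).

```python
#!/usr/bin/env python3
"""Check whether the Calico image tags reported above resolve on ghcr.io.

Diagnostic sketch using only the standard library; assumes GHCR's standard
OCI distribution endpoints and anonymous token flow for public repositories.
"""
import json
import urllib.error
import urllib.parse
import urllib.request

# Image references taken from the kubelet errors in this log excerpt.
IMAGES = [
    "flatcar/calico/kube-controllers:v3.30.4",
    "flatcar/calico/whisker:v3.30.4",
    "flatcar/calico/whisker-backend:v3.30.4",
    "flatcar/calico/apiserver:v3.30.4",
    "flatcar/calico/csi:v3.30.4",
    "flatcar/calico/node-driver-registrar:v3.30.4",
    "flatcar/calico/goldmane:v3.30.4",
]
ACCEPT = ("application/vnd.oci.image.index.v1+json, "
          "application/vnd.docker.distribution.manifest.list.v2+json, "
          "application/vnd.oci.image.manifest.v1+json, "
          "application/vnd.docker.distribution.manifest.v2+json")

def anonymous_token(repository: str) -> str:
    """Fetch an anonymous pull token for a public repository on ghcr.io."""
    query = urllib.parse.urlencode({"service": "ghcr.io",
                                    "scope": f"repository:{repository}:pull"})
    with urllib.request.urlopen(f"https://ghcr.io/token?{query}") as resp:
        return json.load(resp)["token"]

def manifest_status(repository: str, tag: str) -> int:
    """Return the HTTP status of a HEAD request for the tag's manifest."""
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repository}/manifests/{tag}",
        headers={"Authorization": f"Bearer {anonymous_token(repository)}",
                 "Accept": ACCEPT},
        method="HEAD")
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

if __name__ == "__main__":
    for ref in IMAGES:
        repository, tag = ref.rsplit(":", 1)
        print(f"{ref}: HTTP {manifest_status(repository, tag)}")
```

A 404 from such a probe would confirm that the back-off loops above cannot clear on their own: until the tags are published (or the pods are pointed at a registry and tag that exist), kubelet will keep retrying and re-emitting the same ImagePullBackOff errors.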