Oct 29 23:59:39.599595 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Oct 29 22:08:54 -00 2025 Oct 29 23:59:39.599614 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=56cc5d11e9ee9e328725323e5b298567de51aff19ad0756381062170c9c03796 Oct 29 23:59:39.599622 kernel: Disabled fast string operations Oct 29 23:59:39.599626 kernel: BIOS-provided physical RAM map: Oct 29 23:59:39.599631 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Oct 29 23:59:39.599635 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Oct 29 23:59:39.599642 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Oct 29 23:59:39.599647 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Oct 29 23:59:39.599652 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Oct 29 23:59:39.599656 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Oct 29 23:59:39.599661 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Oct 29 23:59:39.599666 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Oct 29 23:59:39.599670 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Oct 29 23:59:39.599675 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Oct 29 23:59:39.599682 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Oct 29 23:59:39.599687 kernel: NX (Execute Disable) protection: active Oct 29 23:59:39.599692 kernel: APIC: Static calls initialized Oct 29 23:59:39.599698 kernel: 
SMBIOS 2.7 present. Oct 29 23:59:39.599703 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Oct 29 23:59:39.599708 kernel: DMI: Memory slots populated: 1/128 Oct 29 23:59:39.599715 kernel: vmware: hypercall mode: 0x00 Oct 29 23:59:39.599720 kernel: Hypervisor detected: VMware Oct 29 23:59:39.599725 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Oct 29 23:59:39.599730 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Oct 29 23:59:39.599735 kernel: vmware: using clock offset of 2849948287 ns Oct 29 23:59:39.599741 kernel: tsc: Detected 3408.000 MHz processor Oct 29 23:59:39.599746 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 29 23:59:39.599752 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 29 23:59:39.599758 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Oct 29 23:59:39.599765 kernel: total RAM covered: 3072M Oct 29 23:59:39.599770 kernel: Found optimal setting for mtrr clean up Oct 29 23:59:39.599776 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Oct 29 23:59:39.599782 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Oct 29 23:59:39.599788 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 29 23:59:39.599793 kernel: Using GB pages for direct mapping Oct 29 23:59:39.599798 kernel: ACPI: Early table checksum verification disabled Oct 29 23:59:39.599804 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Oct 29 23:59:39.599810 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Oct 29 23:59:39.599816 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Oct 29 23:59:39.599822 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Oct 29 23:59:39.599829 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Oct 29 23:59:39.599834 kernel: ACPI: FACS 
0x000000007FEFFFC0 000040 Oct 29 23:59:39.599841 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Oct 29 23:59:39.599847 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Oct 29 23:59:39.599853 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Oct 29 23:59:39.599859 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Oct 29 23:59:39.599865 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Oct 29 23:59:39.599870 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Oct 29 23:59:39.599877 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Oct 29 23:59:39.599883 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Oct 29 23:59:39.599889 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Oct 29 23:59:39.599895 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Oct 29 23:59:39.599900 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Oct 29 23:59:39.599906 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Oct 29 23:59:39.599911 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Oct 29 23:59:39.599917 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Oct 29 23:59:39.599924 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Oct 29 23:59:39.599929 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Oct 29 23:59:39.599935 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Oct 29 23:59:39.599941 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Oct 29 23:59:39.599947 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Oct 29 23:59:39.599953 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 
0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff] Oct 29 23:59:39.599959 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff] Oct 29 23:59:39.599966 kernel: Zone ranges: Oct 29 23:59:39.599972 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 29 23:59:39.599977 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Oct 29 23:59:39.599983 kernel: Normal empty Oct 29 23:59:39.599988 kernel: Device empty Oct 29 23:59:39.599994 kernel: Movable zone start for each node Oct 29 23:59:39.600000 kernel: Early memory node ranges Oct 29 23:59:39.600006 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Oct 29 23:59:39.600012 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Oct 29 23:59:39.600019 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Oct 29 23:59:39.600198 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Oct 29 23:59:39.600208 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 29 23:59:39.600215 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Oct 29 23:59:39.600220 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Oct 29 23:59:39.600226 kernel: ACPI: PM-Timer IO Port: 0x1008 Oct 29 23:59:39.600232 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Oct 29 23:59:39.600240 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Oct 29 23:59:39.600246 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Oct 29 23:59:39.600251 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Oct 29 23:59:39.600257 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Oct 29 23:59:39.600263 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Oct 29 23:59:39.600268 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Oct 29 23:59:39.600274 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Oct 29 23:59:39.600280 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Oct 29 23:59:39.600287 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x09] high edge lint[0x1]) Oct 29 23:59:39.600292 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Oct 29 23:59:39.600298 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Oct 29 23:59:39.600303 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Oct 29 23:59:39.600309 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Oct 29 23:59:39.600314 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Oct 29 23:59:39.600320 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Oct 29 23:59:39.600326 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Oct 29 23:59:39.600332 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Oct 29 23:59:39.600338 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Oct 29 23:59:39.600343 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Oct 29 23:59:39.600349 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Oct 29 23:59:39.600355 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Oct 29 23:59:39.600360 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Oct 29 23:59:39.600366 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Oct 29 23:59:39.600371 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Oct 29 23:59:39.600378 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Oct 29 23:59:39.600384 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Oct 29 23:59:39.600392 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Oct 29 23:59:39.600505 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Oct 29 23:59:39.600511 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Oct 29 23:59:39.600517 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Oct 29 23:59:39.600522 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Oct 29 23:59:39.600528 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Oct 29 23:59:39.600536 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x21] high edge lint[0x1]) Oct 29 23:59:39.600542 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Oct 29 23:59:39.600547 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Oct 29 23:59:39.600553 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Oct 29 23:59:39.600559 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Oct 29 23:59:39.600564 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Oct 29 23:59:39.600570 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Oct 29 23:59:39.600580 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Oct 29 23:59:39.600586 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Oct 29 23:59:39.600592 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Oct 29 23:59:39.600599 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Oct 29 23:59:39.600605 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Oct 29 23:59:39.601692 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Oct 29 23:59:39.601893 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Oct 29 23:59:39.601901 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Oct 29 23:59:39.601915 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Oct 29 23:59:39.601922 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Oct 29 23:59:39.601928 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Oct 29 23:59:39.601934 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Oct 29 23:59:39.601940 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Oct 29 23:59:39.601946 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Oct 29 23:59:39.601952 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Oct 29 23:59:39.601958 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Oct 29 23:59:39.601964 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Oct 29 23:59:39.601971 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x39] high edge lint[0x1]) Oct 29 23:59:39.601977 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Oct 29 23:59:39.601983 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Oct 29 23:59:39.601989 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Oct 29 23:59:39.601995 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Oct 29 23:59:39.602001 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Oct 29 23:59:39.602007 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Oct 29 23:59:39.602013 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Oct 29 23:59:39.602020 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Oct 29 23:59:39.602034 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Oct 29 23:59:39.602041 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Oct 29 23:59:39.602047 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Oct 29 23:59:39.602053 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Oct 29 23:59:39.602058 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Oct 29 23:59:39.602065 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Oct 29 23:59:39.602070 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Oct 29 23:59:39.602078 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Oct 29 23:59:39.602084 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Oct 29 23:59:39.602090 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Oct 29 23:59:39.602096 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Oct 29 23:59:39.602102 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Oct 29 23:59:39.602108 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Oct 29 23:59:39.602114 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Oct 29 23:59:39.602120 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Oct 29 23:59:39.602127 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x51] high edge lint[0x1]) Oct 29 23:59:39.602133 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Oct 29 23:59:39.602139 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Oct 29 23:59:39.602144 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Oct 29 23:59:39.602150 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Oct 29 23:59:39.602156 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Oct 29 23:59:39.602163 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Oct 29 23:59:39.602168 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Oct 29 23:59:39.602174 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Oct 29 23:59:39.602181 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Oct 29 23:59:39.602187 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Oct 29 23:59:39.602193 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Oct 29 23:59:39.602199 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Oct 29 23:59:39.602205 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Oct 29 23:59:39.602211 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Oct 29 23:59:39.602216 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Oct 29 23:59:39.602222 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Oct 29 23:59:39.602229 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Oct 29 23:59:39.602236 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Oct 29 23:59:39.602241 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Oct 29 23:59:39.602248 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Oct 29 23:59:39.602253 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Oct 29 23:59:39.602259 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Oct 29 23:59:39.602265 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Oct 29 23:59:39.602271 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x69] high edge lint[0x1]) Oct 29 23:59:39.602278 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Oct 29 23:59:39.602284 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Oct 29 23:59:39.602290 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Oct 29 23:59:39.602296 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Oct 29 23:59:39.602302 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Oct 29 23:59:39.602308 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Oct 29 23:59:39.602313 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Oct 29 23:59:39.602319 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Oct 29 23:59:39.602326 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Oct 29 23:59:39.602332 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Oct 29 23:59:39.602338 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Oct 29 23:59:39.602344 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Oct 29 23:59:39.602350 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Oct 29 23:59:39.602356 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Oct 29 23:59:39.602362 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Oct 29 23:59:39.602368 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Oct 29 23:59:39.602375 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Oct 29 23:59:39.602381 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Oct 29 23:59:39.602387 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Oct 29 23:59:39.602392 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Oct 29 23:59:39.602398 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Oct 29 23:59:39.602404 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Oct 29 23:59:39.602410 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Oct 29 23:59:39.602416 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Oct 29 23:59:39.602423 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 29 23:59:39.602430 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Oct 29 23:59:39.602436 kernel: TSC deadline timer available Oct 29 23:59:39.602442 kernel: CPU topo: Max. logical packages: 128 Oct 29 23:59:39.602448 kernel: CPU topo: Max. logical dies: 128 Oct 29 23:59:39.602455 kernel: CPU topo: Max. dies per package: 1 Oct 29 23:59:39.602460 kernel: CPU topo: Max. threads per core: 1 Oct 29 23:59:39.602466 kernel: CPU topo: Num. cores per package: 1 Oct 29 23:59:39.602473 kernel: CPU topo: Num. threads per package: 1 Oct 29 23:59:39.602479 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs Oct 29 23:59:39.602485 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Oct 29 23:59:39.602491 kernel: Booting paravirtualized kernel on VMware hypervisor Oct 29 23:59:39.602497 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 29 23:59:39.602503 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Oct 29 23:59:39.602509 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Oct 29 23:59:39.602516 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Oct 29 23:59:39.602523 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Oct 29 23:59:39.602529 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Oct 29 23:59:39.602535 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Oct 29 23:59:39.602541 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Oct 29 23:59:39.602547 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Oct 29 23:59:39.602553 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Oct 29 23:59:39.602559 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Oct 29 23:59:39.602566 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Oct 29 
23:59:39.602572 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Oct 29 23:59:39.602578 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Oct 29 23:59:39.602584 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Oct 29 23:59:39.602590 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Oct 29 23:59:39.602596 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Oct 29 23:59:39.602602 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Oct 29 23:59:39.602609 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Oct 29 23:59:39.602615 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Oct 29 23:59:39.602622 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=56cc5d11e9ee9e328725323e5b298567de51aff19ad0756381062170c9c03796 Oct 29 23:59:39.602629 kernel: random: crng init done Oct 29 23:59:39.602635 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Oct 29 23:59:39.602641 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Oct 29 23:59:39.602648 kernel: printk: log_buf_len min size: 262144 bytes Oct 29 23:59:39.602654 kernel: printk: log_buf_len: 1048576 bytes Oct 29 23:59:39.602660 kernel: printk: early log buf free: 245688(93%) Oct 29 23:59:39.602666 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 29 23:59:39.602673 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 29 23:59:39.602679 kernel: Fallback order for Node 0: 0 Oct 29 23:59:39.602685 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 524157 Oct 29 23:59:39.602691 kernel: Policy zone: DMA32 Oct 29 23:59:39.602698 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 29 23:59:39.602704 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Oct 29 23:59:39.602710 kernel: ftrace: allocating 40092 entries in 157 pages Oct 29 23:59:39.602716 kernel: ftrace: allocated 157 pages with 5 groups Oct 29 23:59:39.602722 kernel: Dynamic Preempt: voluntary Oct 29 23:59:39.602728 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 29 23:59:39.602735 kernel: rcu: RCU event tracing is enabled. Oct 29 23:59:39.602742 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Oct 29 23:59:39.602748 kernel: Trampoline variant of Tasks RCU enabled. Oct 29 23:59:39.602754 kernel: Rude variant of Tasks RCU enabled. Oct 29 23:59:39.602760 kernel: Tracing variant of Tasks RCU enabled. Oct 29 23:59:39.602766 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 29 23:59:39.602772 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Oct 29 23:59:39.602778 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Oct 29 23:59:39.602784 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Oct 29 23:59:39.602791 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Oct 29 23:59:39.602798 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Oct 29 23:59:39.602804 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. 
Oct 29 23:59:39.602810 kernel: Console: colour VGA+ 80x25 Oct 29 23:59:39.602816 kernel: printk: legacy console [tty0] enabled Oct 29 23:59:39.602822 kernel: printk: legacy console [ttyS0] enabled Oct 29 23:59:39.602829 kernel: ACPI: Core revision 20240827 Oct 29 23:59:39.602836 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Oct 29 23:59:39.602842 kernel: APIC: Switch to symmetric I/O mode setup Oct 29 23:59:39.602848 kernel: x2apic enabled Oct 29 23:59:39.602854 kernel: APIC: Switched APIC routing to: physical x2apic Oct 29 23:59:39.602861 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 29 23:59:39.602867 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Oct 29 23:59:39.602874 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000) Oct 29 23:59:39.602881 kernel: Disabled fast string operations Oct 29 23:59:39.602887 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Oct 29 23:59:39.602894 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Oct 29 23:59:39.602900 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 29 23:59:39.602906 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Oct 29 23:59:39.602912 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Oct 29 23:59:39.602918 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Oct 29 23:59:39.602926 kernel: RETBleed: Mitigation: Enhanced IBRS Oct 29 23:59:39.602932 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 29 23:59:39.602938 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 29 23:59:39.602944 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 29 23:59:39.602951 kernel: SRBDS: Unknown: Dependent on hypervisor 
status Oct 29 23:59:39.602957 kernel: GDS: Unknown: Dependent on hypervisor status Oct 29 23:59:39.602963 kernel: active return thunk: its_return_thunk Oct 29 23:59:39.602970 kernel: ITS: Mitigation: Aligned branch/return thunks Oct 29 23:59:39.602977 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 29 23:59:39.602983 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 29 23:59:39.602989 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 29 23:59:39.602995 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 29 23:59:39.603002 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 29 23:59:39.603008 kernel: Freeing SMP alternatives memory: 32K Oct 29 23:59:39.603015 kernel: pid_max: default: 131072 minimum: 1024 Oct 29 23:59:39.603021 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 29 23:59:39.605057 kernel: landlock: Up and running. Oct 29 23:59:39.605069 kernel: SELinux: Initializing. Oct 29 23:59:39.605080 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 29 23:59:39.605087 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 29 23:59:39.605094 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Oct 29 23:59:39.605106 kernel: Performance Events: Skylake events, core PMU driver. 
Oct 29 23:59:39.605113 kernel: core: CPUID marked event: 'cpu cycles' unavailable Oct 29 23:59:39.605120 kernel: core: CPUID marked event: 'instructions' unavailable Oct 29 23:59:39.605126 kernel: core: CPUID marked event: 'bus cycles' unavailable Oct 29 23:59:39.605132 kernel: core: CPUID marked event: 'cache references' unavailable Oct 29 23:59:39.605142 kernel: core: CPUID marked event: 'cache misses' unavailable Oct 29 23:59:39.605148 kernel: core: CPUID marked event: 'branch instructions' unavailable Oct 29 23:59:39.605156 kernel: core: CPUID marked event: 'branch misses' unavailable Oct 29 23:59:39.605162 kernel: ... version: 1 Oct 29 23:59:39.605172 kernel: ... bit width: 48 Oct 29 23:59:39.605178 kernel: ... generic registers: 4 Oct 29 23:59:39.605185 kernel: ... value mask: 0000ffffffffffff Oct 29 23:59:39.605191 kernel: ... max period: 000000007fffffff Oct 29 23:59:39.605201 kernel: ... fixed-purpose events: 0 Oct 29 23:59:39.605209 kernel: ... event mask: 000000000000000f Oct 29 23:59:39.605215 kernel: signal: max sigframe size: 1776 Oct 29 23:59:39.605222 kernel: rcu: Hierarchical SRCU implementation. Oct 29 23:59:39.605231 kernel: rcu: Max phase no-delay instances is 400. Oct 29 23:59:39.605238 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level Oct 29 23:59:39.605244 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 29 23:59:39.605250 kernel: smp: Bringing up secondary CPUs ... Oct 29 23:59:39.605257 kernel: smpboot: x86: Booting SMP configuration: Oct 29 23:59:39.605271 kernel: .... 
node #0, CPUs: #1 Oct 29 23:59:39.605278 kernel: Disabled fast string operations Oct 29 23:59:39.605284 kernel: smp: Brought up 1 node, 2 CPUs Oct 29 23:59:39.605294 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Oct 29 23:59:39.605301 kernel: Memory: 1946764K/2096628K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15956K init, 2088K bss, 138484K reserved, 0K cma-reserved) Oct 29 23:59:39.605308 kernel: devtmpfs: initialized Oct 29 23:59:39.605314 kernel: x86/mm: Memory block size: 128MB Oct 29 23:59:39.605325 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Oct 29 23:59:39.605332 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 29 23:59:39.605339 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Oct 29 23:59:39.605345 kernel: pinctrl core: initialized pinctrl subsystem Oct 29 23:59:39.605355 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 29 23:59:39.605361 kernel: audit: initializing netlink subsys (disabled) Oct 29 23:59:39.605367 kernel: audit: type=2000 audit(1761782376.284:1): state=initialized audit_enabled=0 res=1 Oct 29 23:59:39.605376 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 29 23:59:39.605382 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 29 23:59:39.605388 kernel: cpuidle: using governor menu Oct 29 23:59:39.605394 kernel: Simple Boot Flag at 0x36 set to 0x80 Oct 29 23:59:39.605400 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 29 23:59:39.605406 kernel: dca service started, version 1.12.1 Oct 29 23:59:39.605413 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f] Oct 29 23:59:39.605426 kernel: PCI: Using configuration type 1 for base access Oct 29 23:59:39.605433 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 29 23:59:39.605440 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 29 23:59:39.605447 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 29 23:59:39.605453 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 29 23:59:39.605460 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 29 23:59:39.605466 kernel: ACPI: Added _OSI(Module Device)
Oct 29 23:59:39.605474 kernel: ACPI: Added _OSI(Processor Device)
Oct 29 23:59:39.605480 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 29 23:59:39.605487 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 29 23:59:39.605494 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Oct 29 23:59:39.605501 kernel: ACPI: Interpreter enabled
Oct 29 23:59:39.605507 kernel: ACPI: PM: (supports S0 S1 S5)
Oct 29 23:59:39.605513 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 29 23:59:39.605521 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 29 23:59:39.605528 kernel: PCI: Using E820 reservations for host bridge windows
Oct 29 23:59:39.605534 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Oct 29 23:59:39.605540 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Oct 29 23:59:39.605669 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 29 23:59:39.605742 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Oct 29 23:59:39.605812 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Oct 29 23:59:39.605822 kernel: PCI host bridge to bus 0000:00
Oct 29 23:59:39.605890 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 29 23:59:39.605952 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Oct 29 23:59:39.606011 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 29 23:59:39.606095 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 29 23:59:39.606159 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Oct 29 23:59:39.606218 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Oct 29 23:59:39.606303 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint
Oct 29 23:59:39.606379 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge
Oct 29 23:59:39.606468 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Oct 29 23:59:39.606546 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Oct 29 23:59:39.606619 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint
Oct 29 23:59:39.606690 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f]
Oct 29 23:59:39.606760 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Oct 29 23:59:39.606826 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Oct 29 23:59:39.606892 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Oct 29 23:59:39.606964 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk
Oct 29 23:59:39.607524 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 29 23:59:39.607604 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Oct 29 23:59:39.607678 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Oct 29 23:59:39.607751 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint
Oct 29 23:59:39.607818 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf]
Oct 29 23:59:39.607885 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit]
Oct 29 23:59:39.607957 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint
Oct 29 23:59:39.608033 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f]
Oct 29 23:59:39.608108 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref]
Oct 29 23:59:39.608174 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff]
Oct 29 23:59:39.608239 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref]
Oct 29 23:59:39.608304 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 29 23:59:39.608375 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge
Oct 29 23:59:39.608443 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Oct 29 23:59:39.608513 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Oct 29 23:59:39.608580 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Oct 29 23:59:39.608648 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Oct 29 23:59:39.608932 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.610131 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Oct 29 23:59:39.610210 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Oct 29 23:59:39.610280 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Oct 29 23:59:39.610348 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.610421 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.610489 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Oct 29 23:59:39.610556 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Oct 29 23:59:39.610625 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Oct 29 23:59:39.610692 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Oct 29 23:59:39.611730 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.611810 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.611881 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Oct 29 23:59:39.611954 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Oct 29 23:59:39.612023 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Oct 29 23:59:39.612109 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Oct 29 23:59:39.612177 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.612249 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.612315 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Oct 29 23:59:39.612385 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Oct 29 23:59:39.612451 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Oct 29 23:59:39.612518 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.618344 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.618428 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Oct 29 23:59:39.618501 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Oct 29 23:59:39.618570 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Oct 29 23:59:39.618637 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.618713 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.618780 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Oct 29 23:59:39.618847 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Oct 29 23:59:39.618915 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Oct 29 23:59:39.618981 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.619065 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.619135 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Oct 29 23:59:39.619201 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Oct 29 23:59:39.619267 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Oct 29 23:59:39.619337 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.619411 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.619478 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Oct 29 23:59:39.619545 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Oct 29 23:59:39.619610 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Oct 29 23:59:39.619677 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.619746 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.619815 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Oct 29 23:59:39.619880 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Oct 29 23:59:39.619951 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Oct 29 23:59:39.620017 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.620096 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.620179 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Oct 29 23:59:39.620266 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Oct 29 23:59:39.620349 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Oct 29 23:59:39.620424 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Oct 29 23:59:39.620491 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.620569 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.620656 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Oct 29 23:59:39.620724 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Oct 29 23:59:39.620790 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Oct 29 23:59:39.620855 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Oct 29 23:59:39.620920 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.620992 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.621079 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Oct 29 23:59:39.621146 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Oct 29 23:59:39.621214 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Oct 29 23:59:39.621281 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.621352 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.621419 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Oct 29 23:59:39.621488 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Oct 29 23:59:39.621553 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Oct 29 23:59:39.621619 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.621690 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.621757 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Oct 29 23:59:39.621823 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Oct 29 23:59:39.621892 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Oct 29 23:59:39.621958 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.624050 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.624154 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Oct 29 23:59:39.624225 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Oct 29 23:59:39.624294 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Oct 29 23:59:39.624365 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.624440 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.624508 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Oct 29 23:59:39.624580 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Oct 29 23:59:39.624666 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Oct 29 23:59:39.624734 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.624809 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.624876 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Oct 29 23:59:39.624943 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Oct 29 23:59:39.625010 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Oct 29 23:59:39.625087 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Oct 29 23:59:39.625154 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.625234 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.625321 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Oct 29 23:59:39.625390 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Oct 29 23:59:39.625459 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Oct 29 23:59:39.625525 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Oct 29 23:59:39.625589 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.625661 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.625741 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Oct 29 23:59:39.625824 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Oct 29 23:59:39.625903 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Oct 29 23:59:39.625989 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Oct 29 23:59:39.635201 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.635298 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.635369 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Oct 29 23:59:39.635443 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Oct 29 23:59:39.635510 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Oct 29 23:59:39.635576 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.635651 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.635718 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Oct 29 23:59:39.635786 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Oct 29 23:59:39.635856 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Oct 29 23:59:39.635922 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.635993 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.636072 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Oct 29 23:59:39.636139 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Oct 29 23:59:39.636206 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Oct 29 23:59:39.636274 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.636345 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.636411 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Oct 29 23:59:39.636477 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Oct 29 23:59:39.636541 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Oct 29 23:59:39.636607 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.636682 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.636748 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Oct 29 23:59:39.636815 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Oct 29 23:59:39.636881 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Oct 29 23:59:39.636953 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.638526 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.638623 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Oct 29 23:59:39.638695 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Oct 29 23:59:39.638764 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Oct 29 23:59:39.638831 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Oct 29 23:59:39.638899 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.638972 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.639054 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Oct 29 23:59:39.639129 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Oct 29 23:59:39.639196 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Oct 29 23:59:39.639263 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Oct 29 23:59:39.639329 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.639407 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.639479 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Oct 29 23:59:39.639545 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Oct 29 23:59:39.639611 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Oct 29 23:59:39.639678 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.639750 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.639820 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Oct 29 23:59:39.639886 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Oct 29 23:59:39.639951 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Oct 29 23:59:39.640016 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.640160 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.640230 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Oct 29 23:59:39.640300 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Oct 29 23:59:39.640365 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Oct 29 23:59:39.640431 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.640500 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.640567 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Oct 29 23:59:39.640632 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Oct 29 23:59:39.640702 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Oct 29 23:59:39.640768 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.640838 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.640904 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Oct 29 23:59:39.640970 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Oct 29 23:59:39.641049 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Oct 29 23:59:39.641122 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.641279 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 29 23:59:39.641348 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Oct 29 23:59:39.641415 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Oct 29 23:59:39.641481 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Oct 29 23:59:39.641548 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.641618 kernel: pci_bus 0000:01: extended config space not accessible
Oct 29 23:59:39.641687 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Oct 29 23:59:39.641757 kernel: pci_bus 0000:02: extended config space not accessible
Oct 29 23:59:39.641767 kernel: acpiphp: Slot [32] registered
Oct 29 23:59:39.641774 kernel: acpiphp: Slot [33] registered
Oct 29 23:59:39.641781 kernel: acpiphp: Slot [34] registered
Oct 29 23:59:39.641788 kernel: acpiphp: Slot [35] registered
Oct 29 23:59:39.641797 kernel: acpiphp: Slot [36] registered
Oct 29 23:59:39.641803 kernel: acpiphp: Slot [37] registered
Oct 29 23:59:39.641810 kernel: acpiphp: Slot [38] registered
Oct 29 23:59:39.641816 kernel: acpiphp: Slot [39] registered
Oct 29 23:59:39.641823 kernel: acpiphp: Slot [40] registered
Oct 29 23:59:39.641829 kernel: acpiphp: Slot [41] registered
Oct 29 23:59:39.641836 kernel: acpiphp: Slot [42] registered
Oct 29 23:59:39.641843 kernel: acpiphp: Slot [43] registered
Oct 29 23:59:39.641850 kernel: acpiphp: Slot [44] registered
Oct 29 23:59:39.641857 kernel: acpiphp: Slot [45] registered
Oct 29 23:59:39.641864 kernel: acpiphp: Slot [46] registered
Oct 29 23:59:39.641870 kernel: acpiphp: Slot [47] registered
Oct 29 23:59:39.641877 kernel: acpiphp: Slot [48] registered
Oct 29 23:59:39.641883 kernel: acpiphp: Slot [49] registered
Oct 29 23:59:39.641889 kernel: acpiphp: Slot [50] registered
Oct 29 23:59:39.641897 kernel: acpiphp: Slot [51] registered
Oct 29 23:59:39.641903 kernel: acpiphp: Slot [52] registered
Oct 29 23:59:39.641910 kernel: acpiphp: Slot [53] registered
Oct 29 23:59:39.641917 kernel: acpiphp: Slot [54] registered
Oct 29 23:59:39.641923 kernel: acpiphp: Slot [55] registered
Oct 29 23:59:39.641930 kernel: acpiphp: Slot [56] registered
Oct 29 23:59:39.641939 kernel: acpiphp: Slot [57] registered
Oct 29 23:59:39.641950 kernel: acpiphp: Slot [58] registered
Oct 29 23:59:39.641960 kernel: acpiphp: Slot [59] registered
Oct 29 23:59:39.641971 kernel: acpiphp: Slot [60] registered
Oct 29 23:59:39.641979 kernel: acpiphp: Slot [61] registered
Oct 29 23:59:39.641985 kernel: acpiphp: Slot [62] registered
Oct 29 23:59:39.641992 kernel: acpiphp: Slot [63] registered
Oct 29 23:59:39.642084 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Oct 29 23:59:39.642156 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
Oct 29 23:59:39.642272 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode)
Oct 29 23:59:39.642339 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode)
Oct 29 23:59:39.642404 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode)
Oct 29 23:59:39.642470 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode)
Oct 29 23:59:39.642545 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint
Oct 29 23:59:39.642617 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007]
Oct 29 23:59:39.642685 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit]
Oct 29 23:59:39.642753 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref]
Oct 29 23:59:39.642819 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Oct 29 23:59:39.642887 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Oct 29 23:59:39.642954 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Oct 29 23:59:39.643024 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Oct 29 23:59:39.643164 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Oct 29 23:59:39.643237 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Oct 29 23:59:39.643305 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Oct 29 23:59:39.643540 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Oct 29 23:59:39.643612 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Oct 29 23:59:39.643686 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Oct 29 23:59:39.643759 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint
Oct 29 23:59:39.643827 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff]
Oct 29 23:59:39.643893 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff]
Oct 29 23:59:39.643974 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff]
Oct 29 23:59:39.644174 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f]
Oct 29 23:59:39.644249 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref]
Oct 29 23:59:39.644317 kernel: pci 0000:0b:00.0: supports D1 D2
Oct 29 23:59:39.644386 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Oct 29 23:59:39.644452 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Oct 29 23:59:39.644521 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Oct 29 23:59:39.645072 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Oct 29 23:59:39.645151 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Oct 29 23:59:39.645222 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Oct 29 23:59:39.645292 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Oct 29 23:59:39.645361 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Oct 29 23:59:39.645431 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Oct 29 23:59:39.645499 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Oct 29 23:59:39.645570 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Oct 29 23:59:39.645637 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Oct 29 23:59:39.645706 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Oct 29 23:59:39.645774 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Oct 29 23:59:39.645843 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Oct 29 23:59:39.645911 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Oct 29 23:59:39.645983 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Oct 29 23:59:39.646067 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Oct 29 23:59:39.646138 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Oct 29 23:59:39.646206 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Oct 29 23:59:39.646275 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Oct 29 23:59:39.646345 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Oct 29 23:59:39.646871 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Oct 29 23:59:39.646961 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Oct 29 23:59:39.647042 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Oct 29 23:59:39.647118 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Oct 29 23:59:39.647128 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
Oct 29 23:59:39.647136 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
Oct 29 23:59:39.647143 kernel: ACPI: PCI: Interrupt link LNKB disabled
Oct 29 23:59:39.647152 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 29 23:59:39.647159 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
Oct 29 23:59:39.647165 kernel: iommu: Default domain type: Translated
Oct 29 23:59:39.647172 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 29 23:59:39.647179 kernel: PCI: Using ACPI for IRQ routing
Oct 29 23:59:39.647186 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 29 23:59:39.647193 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
Oct 29 23:59:39.647201 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
Oct 29 23:59:39.647269 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
Oct 29 23:59:39.647338 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
Oct 29 23:59:39.649129 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 29 23:59:39.649140 kernel: vgaarb: loaded
Oct 29 23:59:39.649148 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Oct 29 23:59:39.649154 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
Oct 29 23:59:39.649164 kernel: clocksource: Switched to clocksource tsc-early
Oct 29 23:59:39.649171 kernel: VFS: Disk quotas dquot_6.6.0
Oct 29 23:59:39.649177 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 29 23:59:39.649184 kernel: pnp: PnP ACPI init
Oct 29 23:59:39.649260 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
Oct 29 23:59:39.649328 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
Oct 29 23:59:39.649394 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
Oct 29 23:59:39.649460 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
Oct 29 23:59:39.649525 kernel: pnp 00:06: [dma 2]
Oct 29 23:59:39.649840 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
Oct 29 23:59:39.649915 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Oct 29 23:59:39.649982 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
Oct 29 23:59:39.649992 kernel: pnp: PnP ACPI: found 8 devices
Oct 29 23:59:39.649999 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 29 23:59:39.650006 kernel: NET: Registered PF_INET protocol family
Oct 29 23:59:39.650014 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 29 23:59:39.650021 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 29 23:59:39.650063 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 29 23:59:39.650074 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 29 23:59:39.650080 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Oct 29 23:59:39.650087 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 29 23:59:39.650094 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 29 23:59:39.650101 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 29 23:59:39.650107 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 29 23:59:39.650114 kernel: NET: Registered PF_XDP protocol family
Oct 29 23:59:39.650192 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Oct 29 23:59:39.650263 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Oct 29 23:59:39.650332 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Oct 29 23:59:39.650402 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Oct 29 23:59:39.650471 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Oct 29 23:59:39.650541 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Oct 29 23:59:39.650609 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Oct 29 23:59:39.650681 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000
Oct 29 23:59:39.650752 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000
Oct 29 23:59:39.650822 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000
Oct 29 23:59:39.650891 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000
Oct 29 23:59:39.650958 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000
Oct 29 23:59:39.651982 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000
Oct 29 23:59:39.652157 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000
Oct 29 23:59:39.652235 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000
Oct 29 23:59:39.652307 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000
Oct 29 23:59:39.652377 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000
Oct 29 23:59:39.652446 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000
Oct 29 23:59:39.652515 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000
Oct 29 23:59:39.652588 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000
Oct 29 23:59:39.652657 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000
Oct 29 23:59:39.652725 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000
Oct 29 23:59:39.652794 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000
Oct 29 23:59:39.652862 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned
Oct 29 23:59:39.652941 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned
Oct 29 23:59:39.653013 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space
Oct 29 23:59:39.653104 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign
Oct 29 23:59:39.653174 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space
Oct 29 23:59:39.653242 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign
Oct 29 23:59:39.653312 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space
Oct 29 23:59:39.653411 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign
Oct 29 23:59:39.653481 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space
Oct 29 23:59:39.653552 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign
Oct 29 23:59:39.653620 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space
Oct 29 23:59:39.653686 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign
Oct 29 23:59:39.653752 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space
Oct 29 23:59:39.653818 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign
Oct 29 23:59:39.653884 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space
Oct 29 23:59:39.653961 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign
Oct 29 23:59:39.654037 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space
Oct 29 23:59:39.654105 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign
Oct 29 23:59:39.654172 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space
Oct 29 23:59:39.654237 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign
Oct 29 23:59:39.654304 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space
Oct 29 23:59:39.654371 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.654440 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.654507 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.654573 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.654639 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.654706 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.654772 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.654842 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.654907 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.654974 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.655048 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.655171 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.655239 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.655310 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.655378 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.655446 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.655609 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.655682 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.655749 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.655816 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: 
can't assign; no space Oct 29 23:59:39.655884 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.655953 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.656019 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.656102 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.656169 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.656296 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.656364 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.656434 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.656500 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.656565 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.656631 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.656697 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.656762 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.656828 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.656897 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.656966 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.657060 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.657133 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.657199 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.657265 kernel: pci 0000:00:17.5: bridge 
window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.657332 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.657403 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.657470 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.658112 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.658222 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.658295 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.658363 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.658431 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.658498 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.658571 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.658639 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.658708 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.658775 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.658844 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.658910 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.658981 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.659062 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.659133 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.659200 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.659268 kernel: pci 
0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.659334 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.659405 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.659471 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.659539 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Oct 29 23:59:39.659606 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Oct 29 23:59:39.659674 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 29 23:59:39.659744 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Oct 29 23:59:39.659810 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Oct 29 23:59:39.659879 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Oct 29 23:59:39.659950 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 29 23:59:39.660022 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Oct 29 23:59:39.660156 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 29 23:59:39.660224 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Oct 29 23:59:39.660291 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Oct 29 23:59:39.660358 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Oct 29 23:59:39.660431 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 29 23:59:39.660498 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Oct 29 23:59:39.660565 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Oct 29 23:59:39.660632 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Oct 29 23:59:39.660701 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 29 23:59:39.660767 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Oct 29 23:59:39.660834 kernel: pci 0000:00:15.2: bridge window 
[mem 0xfcd00000-0xfcdfffff] Oct 29 23:59:39.661057 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 29 23:59:39.661128 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 29 23:59:39.661196 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 29 23:59:39.661262 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 29 23:59:39.661329 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 29 23:59:39.661395 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 29 23:59:39.661461 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 29 23:59:39.661533 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 29 23:59:39.661600 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 29 23:59:39.661664 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 29 23:59:39.661730 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 29 23:59:39.661795 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 29 23:59:39.661860 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 29 23:59:39.661929 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 29 23:59:39.661995 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 29 23:59:39.662075 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 29 23:59:39.662147 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Oct 29 23:59:39.662215 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 29 23:59:39.662280 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 29 23:59:39.662348 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 29 23:59:39.662414 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Oct 29 23:59:39.662481 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 29 23:59:39.662550 
kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 29 23:59:39.662617 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 29 23:59:39.662683 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 29 23:59:39.662750 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 29 23:59:39.662818 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 29 23:59:39.662884 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 29 23:59:39.662953 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 29 23:59:39.663020 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 29 23:59:39.666135 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Oct 29 23:59:39.666213 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 29 23:59:39.666285 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 29 23:59:39.666358 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 29 23:59:39.666426 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 29 23:59:39.666497 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 29 23:59:39.666572 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 29 23:59:39.666639 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 29 23:59:39.666708 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 29 23:59:39.666776 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 29 23:59:39.666842 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 29 23:59:39.666917 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 29 23:59:39.666986 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 29 23:59:39.667065 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 29 23:59:39.667136 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 
29 23:59:39.667206 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 29 23:59:39.667271 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 29 23:59:39.667337 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 29 23:59:39.667404 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 29 23:59:39.667471 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 29 23:59:39.667549 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 29 23:59:39.667616 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 29 23:59:39.667687 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 29 23:59:39.667753 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 29 23:59:39.667821 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 29 23:59:39.667887 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 29 23:59:39.667954 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 29 23:59:39.668020 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 29 23:59:39.668313 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 29 23:59:39.668389 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 29 23:59:39.668457 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 29 23:59:39.668525 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 29 23:59:39.668594 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 29 23:59:39.668661 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 29 23:59:39.668728 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 29 23:59:39.668800 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 29 23:59:39.668866 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 29 23:59:39.668933 kernel: pci 0000:00:17.6: bridge window [mem 
0xe6200000-0xe62fffff 64bit pref] Oct 29 23:59:39.669002 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 29 23:59:39.669094 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 29 23:59:39.669163 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 29 23:59:39.669236 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 29 23:59:39.669304 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 29 23:59:39.669370 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 29 23:59:39.669436 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 29 23:59:39.669504 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 29 23:59:39.669570 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 29 23:59:39.669635 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Oct 29 23:59:39.670253 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 29 23:59:39.670339 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 29 23:59:39.670409 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 29 23:59:39.670477 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 29 23:59:39.670546 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 29 23:59:39.670614 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 29 23:59:39.670680 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 29 23:59:39.670751 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 29 23:59:39.670818 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 29 23:59:39.670884 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 29 23:59:39.670953 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 29 23:59:39.671039 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 29 23:59:39.671107 kernel: pci 0000:00:18.5: 
bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 29 23:59:39.671179 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 29 23:59:39.671244 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 29 23:59:39.671309 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 29 23:59:39.671376 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 29 23:59:39.671443 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 29 23:59:39.671509 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 29 23:59:39.671575 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Oct 29 23:59:39.671634 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Oct 29 23:59:39.671700 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Oct 29 23:59:39.671759 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Oct 29 23:59:39.671816 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Oct 29 23:59:39.671881 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Oct 29 23:59:39.671955 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Oct 29 23:59:39.672018 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 29 23:59:39.674861 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Oct 29 23:59:39.674932 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Oct 29 23:59:39.674996 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Oct 29 23:59:39.675080 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Oct 29 23:59:39.675146 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Oct 29 23:59:39.675215 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Oct 29 23:59:39.675277 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Oct 29 23:59:39.675338 kernel: pci_bus 0000:03: resource 2 [mem 
0xc0000000-0xc01fffff 64bit pref] Oct 29 23:59:39.675403 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Oct 29 23:59:39.675464 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Oct 29 23:59:39.675527 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Oct 29 23:59:39.675592 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Oct 29 23:59:39.675654 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Oct 29 23:59:39.675714 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Oct 29 23:59:39.675780 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Oct 29 23:59:39.675843 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Oct 29 23:59:39.675911 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Oct 29 23:59:39.675974 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 29 23:59:39.676060 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Oct 29 23:59:39.676123 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Oct 29 23:59:39.676188 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Oct 29 23:59:39.676252 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Oct 29 23:59:39.676317 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Oct 29 23:59:39.676393 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Oct 29 23:59:39.676461 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Oct 29 23:59:39.676525 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Oct 29 23:59:39.676586 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Oct 29 23:59:39.676652 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Oct 29 23:59:39.676712 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Oct 29 23:59:39.676784 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 
64bit pref] Oct 29 23:59:39.676855 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Oct 29 23:59:39.676921 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Oct 29 23:59:39.676990 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Oct 29 23:59:39.677892 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Oct 29 23:59:39.677964 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 29 23:59:39.678068 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Oct 29 23:59:39.678140 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 29 23:59:39.678207 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Oct 29 23:59:39.678268 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Oct 29 23:59:39.678334 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Oct 29 23:59:39.678395 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Oct 29 23:59:39.678464 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Oct 29 23:59:39.678525 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 29 23:59:39.678592 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Oct 29 23:59:39.678652 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Oct 29 23:59:39.678712 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 29 23:59:39.678778 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Oct 29 23:59:39.678841 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Oct 29 23:59:39.678902 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Oct 29 23:59:39.678972 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Oct 29 23:59:39.679054 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Oct 29 23:59:39.679118 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Oct 29 
23:59:39.679187 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Oct 29 23:59:39.679249 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 29 23:59:39.679314 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Oct 29 23:59:39.679374 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 29 23:59:39.679443 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Oct 29 23:59:39.679504 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Oct 29 23:59:39.679572 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Oct 29 23:59:39.679633 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Oct 29 23:59:39.679697 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Oct 29 23:59:39.679759 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 29 23:59:39.679824 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Oct 29 23:59:39.679888 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Oct 29 23:59:39.679948 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Oct 29 23:59:39.680016 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Oct 29 23:59:39.680090 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Oct 29 23:59:39.680151 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Oct 29 23:59:39.680215 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Oct 29 23:59:39.680279 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Oct 29 23:59:39.680344 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Oct 29 23:59:39.680404 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 29 23:59:39.680468 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Oct 29 23:59:39.680528 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] 
Oct 29 23:59:39.680597 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Oct 29 23:59:39.680680 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Oct 29 23:59:39.680754 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Oct 29 23:59:39.680824 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Oct 29 23:59:39.680890 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Oct 29 23:59:39.680954 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 29 23:59:39.681039 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 29 23:59:39.681051 kernel: PCI: CLS 32 bytes, default 64 Oct 29 23:59:39.681058 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 29 23:59:39.681065 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Oct 29 23:59:39.681073 kernel: clocksource: Switched to clocksource tsc Oct 29 23:59:39.681081 kernel: Initialise system trusted keyrings Oct 29 23:59:39.681088 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 29 23:59:39.681095 kernel: Key type asymmetric registered Oct 29 23:59:39.681102 kernel: Asymmetric key parser 'x509' registered Oct 29 23:59:39.681108 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 29 23:59:39.681115 kernel: io scheduler mq-deadline registered Oct 29 23:59:39.681122 kernel: io scheduler kyber registered Oct 29 23:59:39.681128 kernel: io scheduler bfq registered Oct 29 23:59:39.681205 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Oct 29 23:59:39.681273 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.681342 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Oct 29 23:59:39.682445 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.682526 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Oct 29 23:59:39.682607 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.682677 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Oct 29 23:59:39.682751 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.682824 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Oct 29 23:59:39.682892 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.682964 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Oct 29 23:59:39.684060 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.684141 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Oct 29 23:59:39.684211 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.684281 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Oct 29 23:59:39.684349 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.684418 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Oct 29 23:59:39.684484 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.684555 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Oct 29 23:59:39.684621 kernel: pcieport 
0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.684689 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Oct 29 23:59:39.684756 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.684824 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Oct 29 23:59:39.684894 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.684971 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Oct 29 23:59:39.685059 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.685128 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Oct 29 23:59:39.685194 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.685261 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Oct 29 23:59:39.685331 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.685399 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Oct 29 23:59:39.685466 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.685535 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Oct 29 23:59:39.685601 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.685669 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Oct 29 
23:59:39.685736 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.685806 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Oct 29 23:59:39.685873 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.685943 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Oct 29 23:59:39.686013 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.688488 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Oct 29 23:59:39.688594 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.688671 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Oct 29 23:59:39.688741 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.688810 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Oct 29 23:59:39.688878 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.688948 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Oct 29 23:59:39.689016 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.689103 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Oct 29 23:59:39.689172 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.689241 kernel: pcieport 0000:00:18.1: PME: 
Signaling with IRQ 49 Oct 29 23:59:39.689309 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.689376 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Oct 29 23:59:39.689443 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.689513 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Oct 29 23:59:39.689579 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.689648 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Oct 29 23:59:39.689715 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.689795 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Oct 29 23:59:39.689865 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.693058 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Oct 29 23:59:39.693144 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.693218 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Oct 29 23:59:39.693289 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 23:59:39.693303 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 29 23:59:39.693311 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 29 23:59:39.693319 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 29 
23:59:39.693327 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Oct 29 23:59:39.693334 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 29 23:59:39.693341 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 29 23:59:39.693412 kernel: rtc_cmos 00:01: registered as rtc0 Oct 29 23:59:39.693475 kernel: rtc_cmos 00:01: setting system clock to 2025-10-29T23:59:38 UTC (1761782378) Oct 29 23:59:39.693487 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 29 23:59:39.693548 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Oct 29 23:59:39.693558 kernel: intel_pstate: CPU model not supported Oct 29 23:59:39.693565 kernel: NET: Registered PF_INET6 protocol family Oct 29 23:59:39.693572 kernel: Segment Routing with IPv6 Oct 29 23:59:39.693580 kernel: In-situ OAM (IOAM) with IPv6 Oct 29 23:59:39.693587 kernel: NET: Registered PF_PACKET protocol family Oct 29 23:59:39.693596 kernel: Key type dns_resolver registered Oct 29 23:59:39.693603 kernel: IPI shorthand broadcast: enabled Oct 29 23:59:39.693610 kernel: sched_clock: Marking stable (1443003251, 167626866)->(1625341342, -14711225) Oct 29 23:59:39.693617 kernel: registered taskstats version 1 Oct 29 23:59:39.693624 kernel: Loading compiled-in X.509 certificates Oct 29 23:59:39.693631 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: b5a3367ee15a1313a0db8339b653e9e56c1bb8d0' Oct 29 23:59:39.693638 kernel: Demotion targets for Node 0: null Oct 29 23:59:39.693646 kernel: Key type .fscrypt registered Oct 29 23:59:39.693653 kernel: Key type fscrypt-provisioning registered Oct 29 23:59:39.693660 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 29 23:59:39.693667 kernel: ima: Allocated hash algorithm: sha1 Oct 29 23:59:39.693673 kernel: ima: No architecture policies found Oct 29 23:59:39.693681 kernel: clk: Disabling unused clocks Oct 29 23:59:39.693687 kernel: Freeing unused kernel image (initmem) memory: 15956K Oct 29 23:59:39.693696 kernel: Write protecting the kernel read-only data: 40960k Oct 29 23:59:39.693703 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Oct 29 23:59:39.693710 kernel: Run /init as init process Oct 29 23:59:39.693717 kernel: with arguments: Oct 29 23:59:39.693724 kernel: /init Oct 29 23:59:39.693731 kernel: with environment: Oct 29 23:59:39.693738 kernel: HOME=/ Oct 29 23:59:39.693746 kernel: TERM=linux Oct 29 23:59:39.693754 kernel: SCSI subsystem initialized Oct 29 23:59:39.693761 kernel: VMware PVSCSI driver - version 1.0.7.0-k Oct 29 23:59:39.693768 kernel: vmw_pvscsi: using 64bit dma Oct 29 23:59:39.693775 kernel: vmw_pvscsi: max_id: 16 Oct 29 23:59:39.693783 kernel: vmw_pvscsi: setting ring_pages to 8 Oct 29 23:59:39.693789 kernel: vmw_pvscsi: enabling reqCallThreshold Oct 29 23:59:39.693796 kernel: vmw_pvscsi: driver-based request coalescing enabled Oct 29 23:59:39.693804 kernel: vmw_pvscsi: using MSI-X Oct 29 23:59:39.693883 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Oct 29 23:59:39.693970 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Oct 29 23:59:39.694217 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Oct 29 23:59:39.694295 kernel: sd 0:0:0:0: [sda] 25804800 512-byte logical blocks: (13.2 GB/12.3 GiB) Oct 29 23:59:39.694369 kernel: sd 0:0:0:0: [sda] Write Protect is off Oct 29 23:59:39.694444 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Oct 29 23:59:39.694515 kernel: sd 0:0:0:0: [sda] Cache data unavailable Oct 29 23:59:39.694588 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Oct 29 23:59:39.694598 kernel: sda: sda1 sda2 sda3 
sda4 sda6 sda7 sda9 Oct 29 23:59:39.694668 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Oct 29 23:59:39.694678 kernel: libata version 3.00 loaded. Oct 29 23:59:39.694750 kernel: ata_piix 0000:00:07.1: version 2.13 Oct 29 23:59:39.694823 kernel: scsi host1: ata_piix Oct 29 23:59:39.694894 kernel: scsi host2: ata_piix Oct 29 23:59:39.694905 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 Oct 29 23:59:39.694913 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 Oct 29 23:59:39.694919 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Oct 29 23:59:39.695000 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Oct 29 23:59:39.695082 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Oct 29 23:59:39.695093 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 29 23:59:39.695100 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 29 23:59:39.695107 kernel: device-mapper: uevent: version 1.0.3 Oct 29 23:59:39.695115 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 29 23:59:39.695185 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 29 23:59:39.695198 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 29 23:59:39.695205 kernel: raid6: avx2x4 gen() 46992 MB/s Oct 29 23:59:39.695212 kernel: raid6: avx2x2 gen() 52962 MB/s Oct 29 23:59:39.695219 kernel: raid6: avx2x1 gen() 43198 MB/s Oct 29 23:59:39.695226 kernel: raid6: using algorithm avx2x2 gen() 52962 MB/s Oct 29 23:59:39.695233 kernel: raid6: .... 
xor() 32019 MB/s, rmw enabled Oct 29 23:59:39.695240 kernel: raid6: using avx2x2 recovery algorithm Oct 29 23:59:39.695249 kernel: xor: automatically using best checksumming function avx Oct 29 23:59:39.695256 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 29 23:59:39.695263 kernel: BTRFS: device fsid 6b7350c1-23d8-4ac8-84c6-3e4efb0085fe devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (196) Oct 29 23:59:39.695270 kernel: BTRFS info (device dm-0): first mount of filesystem 6b7350c1-23d8-4ac8-84c6-3e4efb0085fe Oct 29 23:59:39.695277 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 29 23:59:39.695284 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 29 23:59:39.695291 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 29 23:59:39.695299 kernel: BTRFS info (device dm-0): enabling free space tree Oct 29 23:59:39.695307 kernel: loop: module loaded Oct 29 23:59:39.695314 kernel: loop0: detected capacity change from 0 to 100120 Oct 29 23:59:39.695321 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 29 23:59:39.695330 systemd[1]: Successfully made /usr/ read-only. Oct 29 23:59:39.695339 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 29 23:59:39.695348 systemd[1]: Detected virtualization vmware. Oct 29 23:59:39.695355 systemd[1]: Detected architecture x86-64. Oct 29 23:59:39.695362 systemd[1]: Running in initrd. Oct 29 23:59:39.695369 systemd[1]: No hostname configured, using default hostname. Oct 29 23:59:39.695376 systemd[1]: Hostname set to . Oct 29 23:59:39.695383 systemd[1]: Initializing machine ID from random generator. 
Oct 29 23:59:39.695392 systemd[1]: Queued start job for default target initrd.target. Oct 29 23:59:39.695399 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 29 23:59:39.695407 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 29 23:59:39.695414 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 29 23:59:39.695422 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 29 23:59:39.695429 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 29 23:59:39.695438 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 29 23:59:39.695446 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 29 23:59:39.695453 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 29 23:59:39.695460 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 29 23:59:39.695467 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 29 23:59:39.695474 systemd[1]: Reached target paths.target - Path Units. Oct 29 23:59:39.695482 systemd[1]: Reached target slices.target - Slice Units. Oct 29 23:59:39.695489 systemd[1]: Reached target swap.target - Swaps. Oct 29 23:59:39.695516 systemd[1]: Reached target timers.target - Timer Units. Oct 29 23:59:39.695526 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 29 23:59:39.695534 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 29 23:59:39.695541 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 29 23:59:39.695548 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Oct 29 23:59:39.695558 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 29 23:59:39.695565 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 29 23:59:39.695573 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 29 23:59:39.695580 systemd[1]: Reached target sockets.target - Socket Units. Oct 29 23:59:39.695587 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Oct 29 23:59:39.695594 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 29 23:59:39.695602 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 29 23:59:39.695611 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 29 23:59:39.695618 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 29 23:59:39.695626 systemd[1]: Starting systemd-fsck-usr.service... Oct 29 23:59:39.695633 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 29 23:59:39.695640 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 29 23:59:39.695647 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 23:59:39.695656 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 29 23:59:39.695663 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 29 23:59:39.695670 systemd[1]: Finished systemd-fsck-usr.service. Oct 29 23:59:39.695678 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 29 23:59:39.695702 systemd-journald[331]: Collecting audit messages is disabled. Oct 29 23:59:39.695721 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Oct 29 23:59:39.695728 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 29 23:59:39.695737 kernel: Bridge firewalling registered Oct 29 23:59:39.695745 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 29 23:59:39.695752 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 29 23:59:39.695760 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 29 23:59:39.695767 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 23:59:39.695774 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 29 23:59:39.695781 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 29 23:59:39.695790 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 29 23:59:39.695797 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 29 23:59:39.695804 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 29 23:59:39.695811 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Oct 29 23:59:39.695819 systemd-journald[331]: Journal started Oct 29 23:59:39.695835 systemd-journald[331]: Runtime Journal (/run/log/journal/daabb029cf07450da0a65067b1c184c0) is 4.8M, max 38.5M, 33.7M free. Oct 29 23:59:39.623007 systemd-modules-load[334]: Inserted module 'br_netfilter' Oct 29 23:59:39.699043 systemd[1]: Started systemd-journald.service - Journal Service. Oct 29 23:59:39.702754 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 29 23:59:39.705097 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Oct 29 23:59:39.716043 systemd-tmpfiles[375]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 29 23:59:39.718570 systemd-resolved[351]: Positive Trust Anchors: Oct 29 23:59:39.718575 systemd-resolved[351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 23:59:39.718577 systemd-resolved[351]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 29 23:59:39.718599 systemd-resolved[351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 29 23:59:39.718887 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 23:59:39.732201 dracut-cmdline[374]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 ip=139.178.70.100::139.178.70.97:28::ens192:off:1.1.1.1:1.0.0.1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=56cc5d11e9ee9e328725323e5b298567de51aff19ad0756381062170c9c03796 Oct 29 23:59:39.740728 systemd-resolved[351]: Defaulting to hostname 'linux'. Oct 29 23:59:39.741548 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 29 23:59:39.741712 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Oct 29 23:59:39.794049 kernel: Loading iSCSI transport class v2.0-870. Oct 29 23:59:39.806047 kernel: iscsi: registered transport (tcp) Oct 29 23:59:39.832054 kernel: iscsi: registered transport (qla4xxx) Oct 29 23:59:39.832101 kernel: QLogic iSCSI HBA Driver Oct 29 23:59:39.849408 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 29 23:59:39.863991 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 23:59:39.865137 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 29 23:59:39.889826 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 29 23:59:39.890859 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 29 23:59:39.892144 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 29 23:59:39.915645 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 29 23:59:39.916708 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 23:59:39.934548 systemd-udevd[616]: Using default interface naming scheme 'v257'. Oct 29 23:59:39.941530 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 23:59:39.942833 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 29 23:59:39.958374 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 29 23:59:39.959772 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 29 23:59:39.961934 dracut-pre-trigger[691]: rd.md=0: removing MD RAID activation Oct 29 23:59:39.978473 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 29 23:59:39.980097 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Oct 29 23:59:39.991111 systemd-networkd[722]: lo: Link UP Oct 29 23:59:39.991412 systemd-networkd[722]: lo: Gained carrier Oct 29 23:59:39.992164 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 29 23:59:39.992466 systemd[1]: Reached target network.target - Network. Oct 29 23:59:40.067661 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 29 23:59:40.070319 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 29 23:59:40.168267 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Oct 29 23:59:40.185305 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Oct 29 23:59:40.185340 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Oct 29 23:59:40.185186 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Oct 29 23:59:40.191043 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Oct 29 23:59:40.195727 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Oct 29 23:59:40.203149 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Oct 29 23:59:40.208312 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 29 23:59:40.220042 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Oct 29 23:59:40.222083 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Oct 29 23:59:40.223847 systemd-networkd[722]: eth0: Interface name change detected, renamed to ens192. Oct 29 23:59:40.226038 kernel: cryptd: max_cpu_qlen set to 1000 Oct 29 23:59:40.226109 (udev-worker)[761]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Oct 29 23:59:40.229252 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 29 23:59:40.239322 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Oct 29 23:59:40.239768 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 23:59:40.240656 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 23:59:40.248352 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 29 23:59:40.248495 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 29 23:59:40.248582 kernel: AES CTR mode by8 optimization enabled Oct 29 23:59:40.245333 systemd-networkd[722]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Oct 29 23:59:40.249432 systemd-networkd[722]: ens192: Link UP Oct 29 23:59:40.250590 systemd-networkd[722]: ens192: Gained carrier Oct 29 23:59:40.286152 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 23:59:40.332114 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 29 23:59:40.332721 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 29 23:59:40.332839 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 29 23:59:40.332947 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 29 23:59:40.334066 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 29 23:59:40.346250 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 29 23:59:40.605421 systemd-resolved[351]: Detected conflict on linux IN A 139.178.70.100 Oct 29 23:59:40.605435 systemd-resolved[351]: Hostname conflict, changing published hostname from 'linux' to 'linux9'. Oct 29 23:59:41.298671 disk-uuid[802]: Warning: The kernel is still using the old partition table. Oct 29 23:59:41.298671 disk-uuid[802]: The new table will be used at the next reboot or after you Oct 29 23:59:41.298671 disk-uuid[802]: run partprobe(8) or kpartx(8) Oct 29 23:59:41.298671 disk-uuid[802]: The operation has completed successfully. 
Oct 29 23:59:41.301531 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 29 23:59:41.301590 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 29 23:59:41.302436 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 29 23:59:41.331644 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (880) Oct 29 23:59:41.331680 kernel: BTRFS info (device sda6): first mount of filesystem 03993d8b-786f-4e51-be25-d341ee6662e9 Oct 29 23:59:41.331694 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 29 23:59:41.335185 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 29 23:59:41.335206 kernel: BTRFS info (device sda6): enabling free space tree Oct 29 23:59:41.339838 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 29 23:59:41.340082 kernel: BTRFS info (device sda6): last unmount of filesystem 03993d8b-786f-4e51-be25-d341ee6662e9 Oct 29 23:59:41.340859 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Oct 29 23:59:41.465213 ignition[899]: Ignition 2.22.0 Oct 29 23:59:41.465507 ignition[899]: Stage: fetch-offline Oct 29 23:59:41.465642 ignition[899]: no configs at "/usr/lib/ignition/base.d" Oct 29 23:59:41.465770 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 29 23:59:41.465939 ignition[899]: parsed url from cmdline: "" Oct 29 23:59:41.465941 ignition[899]: no config URL provided Oct 29 23:59:41.465944 ignition[899]: reading system config file "/usr/lib/ignition/user.ign" Oct 29 23:59:41.465949 ignition[899]: no config at "/usr/lib/ignition/user.ign" Oct 29 23:59:41.466329 ignition[899]: config successfully fetched Oct 29 23:59:41.466348 ignition[899]: parsing config with SHA512: 5b7dca07b3922027bee907dd18b94324b90218c21d55a077471a668466355f9c21deca5999d3cb6593ba88997a70d4eede7d50aa10a41b8f0f6f1ef3a8867ef3 Oct 29 23:59:41.469753 unknown[899]: fetched base config from "system" Oct 29 23:59:41.469761 unknown[899]: fetched user config from "vmware" Oct 29 23:59:41.469960 ignition[899]: fetch-offline: fetch-offline passed Oct 29 23:59:41.469992 ignition[899]: Ignition finished successfully Oct 29 23:59:41.471161 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 29 23:59:41.471611 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 29 23:59:41.472323 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 29 23:59:41.491517 ignition[906]: Ignition 2.22.0 Oct 29 23:59:41.491530 ignition[906]: Stage: kargs Oct 29 23:59:41.491615 ignition[906]: no configs at "/usr/lib/ignition/base.d" Oct 29 23:59:41.491620 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 29 23:59:41.492218 ignition[906]: kargs: kargs passed Oct 29 23:59:41.492249 ignition[906]: Ignition finished successfully Oct 29 23:59:41.493507 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Oct 29 23:59:41.494399 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 29 23:59:41.518504 ignition[912]: Ignition 2.22.0 Oct 29 23:59:41.518518 ignition[912]: Stage: disks Oct 29 23:59:41.518606 ignition[912]: no configs at "/usr/lib/ignition/base.d" Oct 29 23:59:41.518611 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 29 23:59:41.519237 ignition[912]: disks: disks passed Oct 29 23:59:41.519267 ignition[912]: Ignition finished successfully Oct 29 23:59:41.520241 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 29 23:59:41.520488 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 29 23:59:41.520621 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 29 23:59:41.520819 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 29 23:59:41.521005 systemd[1]: Reached target sysinit.target - System Initialization. Oct 29 23:59:41.521187 systemd[1]: Reached target basic.target - Basic System. Oct 29 23:59:41.521927 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 29 23:59:41.556542 systemd-fsck[920]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Oct 29 23:59:41.558202 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 29 23:59:41.558908 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 29 23:59:41.651260 kernel: EXT4-fs (sda9): mounted filesystem 357f8fb5-672c-465c-a10c-74ee57b7ef1c r/w with ordered data mode. Quota mode: none. Oct 29 23:59:41.650714 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 29 23:59:41.651090 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 29 23:59:41.660697 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 29 23:59:41.663070 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Oct 29 23:59:41.663505 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 29 23:59:41.663719 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 29 23:59:41.663929 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 29 23:59:41.672386 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 29 23:59:41.673175 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 29 23:59:41.679966 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (928) Oct 29 23:59:41.679996 kernel: BTRFS info (device sda6): first mount of filesystem 03993d8b-786f-4e51-be25-d341ee6662e9 Oct 29 23:59:41.680947 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 29 23:59:41.687270 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 29 23:59:41.687320 kernel: BTRFS info (device sda6): enabling free space tree Oct 29 23:59:41.688379 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 29 23:59:41.709544 initrd-setup-root[952]: cut: /sysroot/etc/passwd: No such file or directory Oct 29 23:59:41.712354 initrd-setup-root[959]: cut: /sysroot/etc/group: No such file or directory Oct 29 23:59:41.714561 initrd-setup-root[966]: cut: /sysroot/etc/shadow: No such file or directory Oct 29 23:59:41.717223 initrd-setup-root[973]: cut: /sysroot/etc/gshadow: No such file or directory Oct 29 23:59:41.734336 systemd-networkd[722]: ens192: Gained IPv6LL Oct 29 23:59:41.778492 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 29 23:59:41.779503 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 29 23:59:41.781111 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Oct 29 23:59:41.794236 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 29 23:59:41.796077 kernel: BTRFS info (device sda6): last unmount of filesystem 03993d8b-786f-4e51-be25-d341ee6662e9 Oct 29 23:59:41.810992 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 29 23:59:41.818615 ignition[1041]: INFO : Ignition 2.22.0 Oct 29 23:59:41.818615 ignition[1041]: INFO : Stage: mount Oct 29 23:59:41.818973 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 23:59:41.818973 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 29 23:59:41.819244 ignition[1041]: INFO : mount: mount passed Oct 29 23:59:41.819889 ignition[1041]: INFO : Ignition finished successfully Oct 29 23:59:41.820190 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 29 23:59:41.821174 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 29 23:59:42.651773 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 29 23:59:42.668691 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1053) Oct 29 23:59:42.668730 kernel: BTRFS info (device sda6): first mount of filesystem 03993d8b-786f-4e51-be25-d341ee6662e9 Oct 29 23:59:42.668739 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 29 23:59:42.673123 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 29 23:59:42.673150 kernel: BTRFS info (device sda6): enabling free space tree Oct 29 23:59:42.674249 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 29 23:59:42.692918 ignition[1069]: INFO : Ignition 2.22.0
Oct 29 23:59:42.692918 ignition[1069]: INFO : Stage: files
Oct 29 23:59:42.693337 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 29 23:59:42.693337 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 29 23:59:42.693548 ignition[1069]: DEBUG : files: compiled without relabeling support, skipping
Oct 29 23:59:42.693984 ignition[1069]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 29 23:59:42.693984 ignition[1069]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 29 23:59:42.696465 ignition[1069]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 29 23:59:42.696633 ignition[1069]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 29 23:59:42.696798 unknown[1069]: wrote ssh authorized keys file for user: core
Oct 29 23:59:42.696996 ignition[1069]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 29 23:59:42.698228 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 29 23:59:42.698436 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Oct 29 23:59:42.759731 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 29 23:59:42.956293 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 29 23:59:42.956293 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 29 23:59:42.956697 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 29 23:59:42.956697 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 29 23:59:42.956697 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 29 23:59:42.956697 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 29 23:59:42.956697 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 29 23:59:42.956697 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 29 23:59:42.956697 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 29 23:59:42.981972 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 29 23:59:42.982247 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 29 23:59:42.982247 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 29 23:59:42.984319 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 29 23:59:42.984319 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 29 23:59:42.984717 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Oct 29 23:59:43.453765 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 29 23:59:43.765066 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 29 23:59:43.765066 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Oct 29 23:59:43.766418 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Oct 29 23:59:43.766418 ignition[1069]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Oct 29 23:59:43.766847 ignition[1069]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 29 23:59:43.767269 ignition[1069]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 29 23:59:43.767269 ignition[1069]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Oct 29 23:59:43.767269 ignition[1069]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Oct 29 23:59:43.767269 ignition[1069]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 29 23:59:43.767269 ignition[1069]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 29 23:59:43.767269 ignition[1069]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Oct 29 23:59:43.767269 ignition[1069]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Oct 29 23:59:43.787678 ignition[1069]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 29 23:59:43.789703 ignition[1069]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 29 23:59:43.789860 ignition[1069]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 29 23:59:43.789860 ignition[1069]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Oct 29 23:59:43.789860 ignition[1069]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Oct 29 23:59:43.790990 ignition[1069]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 29 23:59:43.790990 ignition[1069]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 29 23:59:43.790990 ignition[1069]: INFO : files: files passed
Oct 29 23:59:43.790990 ignition[1069]: INFO : Ignition finished successfully
Oct 29 23:59:43.791609 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 29 23:59:43.792643 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 29 23:59:43.794093 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 29 23:59:43.803171 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 29 23:59:43.803334 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 29 23:59:43.808509 initrd-setup-root-after-ignition[1103]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 29 23:59:43.808509 initrd-setup-root-after-ignition[1103]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 29 23:59:43.809094 initrd-setup-root-after-ignition[1107]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 29 23:59:43.810121 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 29 23:59:43.810460 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 29 23:59:43.811114 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 29 23:59:43.836174 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 29 23:59:43.836246 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 29 23:59:43.836521 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 29 23:59:43.836647 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 29 23:59:43.836951 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 29 23:59:43.837433 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 29 23:59:43.847184 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 29 23:59:43.848105 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 29 23:59:43.859591 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 29 23:59:43.859678 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 29 23:59:43.859866 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 29 23:59:43.860141 systemd[1]: Stopped target timers.target - Timer Units.
Oct 29 23:59:43.860323 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 29 23:59:43.860387 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 29 23:59:43.860728 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 29 23:59:43.860888 systemd[1]: Stopped target basic.target - Basic System.
Oct 29 23:59:43.861066 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 29 23:59:43.861260 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 29 23:59:43.861460 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 29 23:59:43.861667 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 29 23:59:43.861864 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 29 23:59:43.862078 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 29 23:59:43.862283 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 29 23:59:43.862486 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 29 23:59:43.862677 systemd[1]: Stopped target swap.target - Swaps.
Oct 29 23:59:43.862838 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 29 23:59:43.862921 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 29 23:59:43.863235 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 29 23:59:43.863475 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 29 23:59:43.863665 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 29 23:59:43.863705 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 29 23:59:43.863879 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 29 23:59:43.863986 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 29 23:59:43.864261 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 29 23:59:43.864324 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 29 23:59:43.864536 systemd[1]: Stopped target paths.target - Path Units.
Oct 29 23:59:43.864673 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 29 23:59:43.864715 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 29 23:59:43.864899 systemd[1]: Stopped target slices.target - Slice Units.
Oct 29 23:59:43.865108 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 29 23:59:43.865282 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 29 23:59:43.865328 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 29 23:59:43.865483 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 29 23:59:43.865526 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 29 23:59:43.865694 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 29 23:59:43.865760 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 29 23:59:43.866013 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 29 23:59:43.866093 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 29 23:59:43.868113 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 29 23:59:43.868600 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 29 23:59:43.868709 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 29 23:59:43.868776 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 29 23:59:43.868939 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 29 23:59:43.868997 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 29 23:59:43.869166 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 29 23:59:43.869225 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 29 23:59:43.872453 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 29 23:59:43.876514 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 29 23:59:43.885866 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 29 23:59:43.891082 ignition[1127]: INFO : Ignition 2.22.0
Oct 29 23:59:43.891082 ignition[1127]: INFO : Stage: umount
Oct 29 23:59:43.891082 ignition[1127]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 29 23:59:43.891082 ignition[1127]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 29 23:59:43.891082 ignition[1127]: INFO : umount: umount passed
Oct 29 23:59:43.891082 ignition[1127]: INFO : Ignition finished successfully
Oct 29 23:59:43.893019 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 29 23:59:43.893247 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 29 23:59:43.893548 systemd[1]: Stopped target network.target - Network.
Oct 29 23:59:43.893744 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 29 23:59:43.893775 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 29 23:59:43.894096 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 29 23:59:43.894122 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 29 23:59:43.894453 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 29 23:59:43.894585 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 29 23:59:43.894808 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 29 23:59:43.894941 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 29 23:59:43.895235 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 29 23:59:43.895468 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 29 23:59:43.903367 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 29 23:59:43.903584 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 29 23:59:43.904496 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 29 23:59:43.904657 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 29 23:59:43.905956 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 29 23:59:43.906218 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 29 23:59:43.906356 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 29 23:59:43.907047 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 29 23:59:43.907266 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 29 23:59:43.907294 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 29 23:59:43.907643 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Oct 29 23:59:43.907667 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Oct 29 23:59:43.908145 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 29 23:59:43.908272 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 29 23:59:43.908658 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 29 23:59:43.908804 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 29 23:59:43.909130 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 29 23:59:43.917724 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 29 23:59:43.917971 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 29 23:59:43.918394 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 29 23:59:43.918419 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 29 23:59:43.919082 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 29 23:59:43.919102 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 29 23:59:43.919215 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 29 23:59:43.919245 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 29 23:59:43.919399 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 29 23:59:43.919422 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 29 23:59:43.919559 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 29 23:59:43.919583 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 29 23:59:43.921094 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 29 23:59:43.921220 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 29 23:59:43.921250 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 29 23:59:43.921440 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 29 23:59:43.921464 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 29 23:59:43.921624 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 29 23:59:43.921649 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 29 23:59:43.921832 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 29 23:59:43.921855 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 29 23:59:43.922015 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 29 23:59:43.922064 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 23:59:43.933489 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 29 23:59:43.933562 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 29 23:59:43.952934 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 29 23:59:43.952997 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 29 23:59:43.981805 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 29 23:59:43.981875 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 29 23:59:43.982176 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 29 23:59:43.982296 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 29 23:59:43.982324 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 29 23:59:43.982893 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 29 23:59:44.004468 systemd[1]: Switching root.
Oct 29 23:59:44.035449 systemd-journald[331]: Journal stopped
Oct 29 23:59:45.206955 systemd-journald[331]: Received SIGTERM from PID 1 (systemd).
Oct 29 23:59:45.206981 kernel: SELinux: policy capability network_peer_controls=1
Oct 29 23:59:45.206990 kernel: SELinux: policy capability open_perms=1
Oct 29 23:59:45.206996 kernel: SELinux: policy capability extended_socket_class=1
Oct 29 23:59:45.207002 kernel: SELinux: policy capability always_check_network=0
Oct 29 23:59:45.207008 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 29 23:59:45.207016 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 29 23:59:45.207022 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 29 23:59:45.207061 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 29 23:59:45.207069 kernel: SELinux: policy capability userspace_initial_context=0
Oct 29 23:59:45.207076 kernel: audit: type=1403 audit(1761782384.652:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 29 23:59:45.207083 systemd[1]: Successfully loaded SELinux policy in 55.293ms.
Oct 29 23:59:45.207093 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.801ms.
Oct 29 23:59:45.207101 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 29 23:59:45.207109 systemd[1]: Detected virtualization vmware.
Oct 29 23:59:45.207117 systemd[1]: Detected architecture x86-64.
Oct 29 23:59:45.207124 systemd[1]: Detected first boot.
Oct 29 23:59:45.207132 systemd[1]: Initializing machine ID from random generator.
Oct 29 23:59:45.207139 zram_generator::config[1170]: No configuration found.
Oct 29 23:59:45.207238 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Oct 29 23:59:45.207251 kernel: Guest personality initialized and is active
Oct 29 23:59:45.207258 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 29 23:59:45.207265 kernel: Initialized host personality
Oct 29 23:59:45.207272 kernel: NET: Registered PF_VSOCK protocol family
Oct 29 23:59:45.207279 systemd[1]: Populated /etc with preset unit settings.
Oct 29 23:59:45.207287 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Oct 29 23:59:45.207296 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Oct 29 23:59:45.207303 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 29 23:59:45.207311 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 29 23:59:45.207318 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 29 23:59:45.207325 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 29 23:59:45.207333 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 29 23:59:45.207341 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 29 23:59:45.207349 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 29 23:59:45.207356 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 29 23:59:45.207364 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 29 23:59:45.207371 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 29 23:59:45.207378 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 29 23:59:45.207388 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 29 23:59:45.207396 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 29 23:59:45.207405 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 29 23:59:45.207413 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 29 23:59:45.207420 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 29 23:59:45.207428 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 29 23:59:45.207436 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 29 23:59:45.207444 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 29 23:59:45.207452 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 29 23:59:45.207459 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 29 23:59:45.207467 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 29 23:59:45.207475 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 29 23:59:45.207482 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 29 23:59:45.207491 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 29 23:59:45.207498 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 29 23:59:45.207506 systemd[1]: Reached target slices.target - Slice Units.
Oct 29 23:59:45.207513 systemd[1]: Reached target swap.target - Swaps.
Oct 29 23:59:45.207520 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 29 23:59:45.207528 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 29 23:59:45.207537 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 29 23:59:45.207545 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 29 23:59:45.207552 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 29 23:59:45.207560 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 29 23:59:45.207569 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 29 23:59:45.207576 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 29 23:59:45.207584 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 29 23:59:45.207592 systemd[1]: Mounting media.mount - External Media Directory...
Oct 29 23:59:45.207599 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 23:59:45.207607 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 29 23:59:45.207614 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 29 23:59:45.207623 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 29 23:59:45.207631 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 29 23:59:45.207639 systemd[1]: Reached target machines.target - Containers.
Oct 29 23:59:45.207646 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 29 23:59:45.207654 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Oct 29 23:59:45.207662 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 29 23:59:45.207669 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 29 23:59:45.207678 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 29 23:59:45.207686 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 29 23:59:45.207693 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 29 23:59:45.207701 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 29 23:59:45.207709 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 29 23:59:45.207717 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 29 23:59:45.207725 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 29 23:59:45.207733 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 29 23:59:45.207741 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 29 23:59:45.207748 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 29 23:59:45.207756 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 29 23:59:45.207764 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 29 23:59:45.207771 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 29 23:59:45.207780 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 29 23:59:45.207788 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 29 23:59:45.207796 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 29 23:59:45.207803 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 29 23:59:45.207811 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 23:59:45.207819 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 29 23:59:45.207827 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 29 23:59:45.207835 systemd[1]: Mounted media.mount - External Media Directory.
Oct 29 23:59:45.207843 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 29 23:59:45.207850 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 29 23:59:45.207858 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 29 23:59:45.207865 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 29 23:59:45.207873 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 29 23:59:45.207881 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 29 23:59:45.207889 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 29 23:59:45.207896 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 29 23:59:45.207904 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 29 23:59:45.207911 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 29 23:59:45.207919 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 29 23:59:45.207927 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 29 23:59:45.207935 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 29 23:59:45.207943 kernel: ACPI: bus type drm_connector registered
Oct 29 23:59:45.207950 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 29 23:59:45.207957 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 29 23:59:45.207965 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 29 23:59:45.207973 kernel: fuse: init (API version 7.41)
Oct 29 23:59:45.207980 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Oct 29 23:59:45.207989 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 29 23:59:45.208008 systemd-journald[1258]: Collecting audit messages is disabled.
Oct 29 23:59:45.208038 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 29 23:59:45.208053 systemd-journald[1258]: Journal started
Oct 29 23:59:45.208069 systemd-journald[1258]: Runtime Journal (/run/log/journal/2258b70e21a44a818efd21adaf77ef1e) is 4.8M, max 38.5M, 33.7M free.
Oct 29 23:59:45.023808 systemd[1]: Queued start job for default target multi-user.target.
Oct 29 23:59:45.035863 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Oct 29 23:59:45.036152 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 29 23:59:45.208539 jq[1240]: true
Oct 29 23:59:45.209080 jq[1272]: true
Oct 29 23:59:45.210044 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 29 23:59:45.212061 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 29 23:59:45.216040 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 29 23:59:45.218105 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 29 23:59:45.218128 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 29 23:59:45.220639 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 29 23:59:45.222037 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 29 23:59:45.231021 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 29 23:59:45.231065 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 29 23:59:45.230933 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 29 23:59:45.231579 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 29 23:59:45.234101 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 29 23:59:45.234459 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 29 23:59:45.242320 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 29 23:59:45.246271 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 29 23:59:45.248211 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 29 23:59:45.269036 kernel: loop1: detected capacity change from 0 to 128048
Oct 29 23:59:45.272356 ignition[1278]: Ignition 2.22.0
Oct 29 23:59:45.272546 ignition[1278]: deleting config from guestinfo properties
Oct 29 23:59:45.273214 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 29 23:59:45.275552 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 29 23:59:45.276017 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 29 23:59:45.280960 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 29 23:59:45.286286 systemd-journald[1258]: Time spent on flushing to /var/log/journal/2258b70e21a44a818efd21adaf77ef1e is 49.028ms for 1751 entries.
Oct 29 23:59:45.286286 systemd-journald[1258]: System Journal (/var/log/journal/2258b70e21a44a818efd21adaf77ef1e) is 8M, max 588.1M, 580.1M free.
Oct 29 23:59:45.340100 systemd-journald[1258]: Received client request to flush runtime journal.
Oct 29 23:59:45.340127 kernel: loop2: detected capacity change from 0 to 219144
Oct 29 23:59:45.295579 ignition[1278]: Successfully deleted config
Oct 29 23:59:45.289163 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 29 23:59:45.297687 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Oct 29 23:59:45.311290 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Oct 29 23:59:45.311299 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Oct 29 23:59:45.318343 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 29 23:59:45.320103 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 29 23:59:45.323389 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 29 23:59:45.323684 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 29 23:59:45.342194 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 29 23:59:45.352089 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 29 23:59:45.355113 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 29 23:59:45.356126 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 29 23:59:45.358042 kernel: loop3: detected capacity change from 0 to 110976
Oct 29 23:59:45.368125 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 29 23:59:45.379888 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
Oct 29 23:59:45.380287 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
Oct 29 23:59:45.385590 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 29 23:59:45.388041 kernel: loop4: detected capacity change from 0 to 2960
Oct 29 23:59:45.398846 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 29 23:59:45.410476 kernel: loop5: detected capacity change from 0 to 128048
Oct 29 23:59:45.421041 kernel: loop6: detected capacity change from 0 to 219144
Oct 29 23:59:45.436037 kernel: loop7: detected capacity change from 0 to 110976
Oct 29 23:59:45.453194 systemd-resolved[1337]: Positive Trust Anchors:
Oct 29 23:59:45.454050 systemd-resolved[1337]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 29 23:59:45.454091 systemd-resolved[1337]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 29 23:59:45.454138 systemd-resolved[1337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 29 23:59:45.455105 kernel: loop1: detected capacity change from 0 to 2960
Oct 29 23:59:45.458370 systemd-resolved[1337]: Defaulting to hostname 'linux'.
Oct 29 23:59:45.459247 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 29 23:59:45.459421 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 29 23:59:45.467044 (sd-merge)[1348]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-vmware.raw'.
Oct 29 23:59:45.469138 (sd-merge)[1348]: Merged extensions into '/usr'.
Oct 29 23:59:45.471700 systemd[1]: Reload requested from client PID 1284 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 29 23:59:45.471709 systemd[1]: Reloading...
Oct 29 23:59:45.504051 zram_generator::config[1375]: No configuration found.
Oct 29 23:59:45.607020 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Oct 29 23:59:45.654221 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 29 23:59:45.654295 systemd[1]: Reloading finished in 182 ms.
Oct 29 23:59:45.667007 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 29 23:59:45.674959 systemd[1]: Starting ensure-sysext.service...
Oct 29 23:59:45.675961 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 29 23:59:45.704788 systemd[1]: Reload requested from client PID 1433 ('systemctl') (unit ensure-sysext.service)...
Oct 29 23:59:45.704800 systemd[1]: Reloading...
Oct 29 23:59:45.737046 zram_generator::config[1463]: No configuration found.
Oct 29 23:59:45.743235 systemd-tmpfiles[1434]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 29 23:59:45.743267 systemd-tmpfiles[1434]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 29 23:59:45.743441 systemd-tmpfiles[1434]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 29 23:59:45.743598 systemd-tmpfiles[1434]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 29 23:59:45.745102 systemd-tmpfiles[1434]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 29 23:59:45.745304 systemd-tmpfiles[1434]: ACLs are not supported, ignoring.
Oct 29 23:59:45.745372 systemd-tmpfiles[1434]: ACLs are not supported, ignoring.
Oct 29 23:59:45.777553 systemd-tmpfiles[1434]: Detected autofs mount point /boot during canonicalization of boot.
Oct 29 23:59:45.777731 systemd-tmpfiles[1434]: Skipping /boot
Oct 29 23:59:45.781837 systemd-tmpfiles[1434]: Detected autofs mount point /boot during canonicalization of boot.
Oct 29 23:59:45.781882 systemd-tmpfiles[1434]: Skipping /boot
Oct 29 23:59:45.820605 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Oct 29 23:59:45.867641 systemd[1]: Reloading finished in 162 ms.
Oct 29 23:59:45.891257 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 29 23:59:45.897447 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 29 23:59:45.901791 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 29 23:59:45.905002 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 29 23:59:45.907841 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 29 23:59:45.910160 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 29 23:59:45.911238 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 29 23:59:45.913556 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 29 23:59:45.917008 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 29 23:59:45.918314 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 29 23:59:45.922813 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 29 23:59:45.927201 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 29 23:59:45.928135 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 29 23:59:45.930099 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 29 23:59:45.930175 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 29 23:59:45.931746 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 29 23:59:45.931804 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 29 23:59:45.945159 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 29 23:59:45.945358 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 29 23:59:45.945421 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 29 23:59:45.948243 systemd[1]: Finished ensure-sysext.service.
Oct 29 23:59:45.952165 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 29 23:59:45.959631 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 29 23:59:45.965716 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 29 23:59:45.966431 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 29 23:59:45.972312 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 29 23:59:45.980100 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 29 23:59:45.983792 systemd-udevd[1526]: Using default interface naming scheme 'v257'.
Oct 29 23:59:45.984219 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 29 23:59:45.986024 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 29 23:59:45.986968 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 29 23:59:45.987512 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 29 23:59:45.994088 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 29 23:59:45.994208 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 29 23:59:45.994385 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 29 23:59:45.994607 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 29 23:59:45.994710 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 29 23:59:45.995417 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 29 23:59:45.995533 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 29 23:59:46.016226 augenrules[1565]: No rules
Oct 29 23:59:46.017120 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 29 23:59:46.018062 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 29 23:59:46.025578 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 29 23:59:46.025872 systemd[1]: Reached target time-set.target - System Time Set.
Oct 29 23:59:46.033785 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 29 23:59:46.034459 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 29 23:59:46.038444 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 29 23:59:46.038564 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 29 23:59:46.040962 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 23:59:46.042273 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 29 23:59:46.043574 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 29 23:59:46.043763 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 23:59:46.051798 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 29 23:59:46.065399 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 29 23:59:46.065522 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 29 23:59:46.067119 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 29 23:59:46.068552 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 29 23:59:46.080598 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 29 23:59:46.160553 systemd-networkd[1584]: lo: Link UP
Oct 29 23:59:46.160557 systemd-networkd[1584]: lo: Gained carrier
Oct 29 23:59:46.162999 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 29 23:59:46.163171 systemd[1]: Reached target network.target - Network.
Oct 29 23:59:46.164206 systemd-networkd[1584]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
Oct 29 23:59:46.166166 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Oct 29 23:59:46.171515 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Oct 29 23:59:46.171829 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Oct 29 23:59:46.171154 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 29 23:59:46.174104 systemd-networkd[1584]: ens192: Link UP
Oct 29 23:59:46.174232 systemd-networkd[1584]: ens192: Gained carrier
Oct 29 23:59:46.177121 systemd-timesyncd[1542]: Network configuration changed, trying to establish connection.
Oct 29 23:59:46.182223 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Oct 29 23:59:46.183482 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 29 23:59:46.197123 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 29 23:59:46.198084 kernel: mousedev: PS/2 mouse device common for all mice
Oct 29 23:59:46.204223 kernel: ACPI: button: Power Button [PWRF]
Oct 29 23:59:46.212703 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Oct 29 23:59:46.213509 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 29 23:59:46.277047 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Oct 29 23:59:46.375451 (udev-worker)[1589]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Oct 29 23:59:46.383912 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 23:59:46.468785 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 23:59:46.483589 ldconfig[1524]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 29 23:59:46.485385 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 29 23:59:46.486502 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 29 23:59:46.498984 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 29 23:59:46.499261 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 29 23:59:46.499423 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 29 23:59:46.499553 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 29 23:59:46.499672 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Oct 29 23:59:46.499850 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 29 23:59:46.500003 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 29 23:59:46.500137 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 29 23:59:46.500251 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 29 23:59:46.500270 systemd[1]: Reached target paths.target - Path Units.
Oct 29 23:59:46.500365 systemd[1]: Reached target timers.target - Timer Units.
Oct 29 23:59:46.500887 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 29 23:59:46.501833 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 29 23:59:46.503238 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Oct 29 23:59:46.503423 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Oct 29 23:59:46.503548 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Oct 29 23:59:46.504961 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 29 23:59:46.505216 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Oct 29 23:59:46.505656 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 29 23:59:46.506186 systemd[1]: Reached target sockets.target - Socket Units.
Oct 29 23:59:46.506288 systemd[1]: Reached target basic.target - Basic System.
Oct 29 23:59:46.506411 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 29 23:59:46.506433 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 29 23:59:46.507099 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 29 23:59:46.509148 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 29 23:59:46.511093 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 29 23:59:46.512118 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 29 23:59:46.513180 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 29 23:59:46.513295 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 29 23:59:46.517249 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Oct 29 23:59:46.517975 jq[1647]: false
Oct 29 23:59:46.518243 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 29 23:59:46.521551 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 29 23:59:46.525140 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 29 23:59:46.527840 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 29 23:59:46.533308 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 29 23:59:46.533500 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 29 23:59:46.535482 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 29 23:59:46.535775 systemd[1]: Starting update-engine.service - Update Engine...
Oct 29 23:59:46.538067 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 29 23:59:46.539501 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools...
Oct 29 23:59:46.545571 google_oslogin_nss_cache[1649]: oslogin_cache_refresh[1649]: Refreshing passwd entry cache
Oct 29 23:59:46.545766 extend-filesystems[1648]: Found /dev/sda6
Oct 29 23:59:46.547078 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 29 23:59:46.547390 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 29 23:59:46.547507 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 29 23:59:46.548005 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 29 23:59:46.548183 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 29 23:59:46.551058 oslogin_cache_refresh[1649]: Refreshing passwd entry cache
Oct 29 23:59:46.558462 jq[1661]: true
Oct 29 23:59:46.561257 google_oslogin_nss_cache[1649]: oslogin_cache_refresh[1649]: Failure getting users, quitting
Oct 29 23:59:46.561257 google_oslogin_nss_cache[1649]: oslogin_cache_refresh[1649]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 29 23:59:46.561257 google_oslogin_nss_cache[1649]: oslogin_cache_refresh[1649]: Refreshing group entry cache
Oct 29 23:59:46.561163 oslogin_cache_refresh[1649]: Failure getting users, quitting
Oct 29 23:59:46.561174 oslogin_cache_refresh[1649]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 29 23:59:46.561197 oslogin_cache_refresh[1649]: Refreshing group entry cache
Oct 29 23:59:46.563153 extend-filesystems[1648]: Found /dev/sda9
Oct 29 23:59:46.568407 google_oslogin_nss_cache[1649]: oslogin_cache_refresh[1649]: Failure getting groups, quitting
Oct 29 23:59:46.568407 google_oslogin_nss_cache[1649]: oslogin_cache_refresh[1649]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 29 23:59:46.567419 oslogin_cache_refresh[1649]: Failure getting groups, quitting
Oct 29 23:59:46.567425 oslogin_cache_refresh[1649]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 29 23:59:46.571432 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Oct 29 23:59:46.571773 extend-filesystems[1648]: Checking size of /dev/sda9
Oct 29 23:59:46.571617 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Oct 29 23:59:46.573723 update_engine[1660]: I20251029 23:59:46.573459 1660 main.cc:92] Flatcar Update Engine starting
Oct 29 23:59:46.575878 systemd[1]: motdgen.service: Deactivated successfully.
Oct 29 23:59:46.578285 (ntainerd)[1684]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 29 23:59:46.579143 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 29 23:59:46.580201 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools.
Oct 29 23:59:46.581179 tar[1666]: linux-amd64/LICENSE
Oct 29 23:59:46.583903 tar[1666]: linux-amd64/helm
Oct 29 23:59:46.585731 dbus-daemon[1645]: [system] SELinux support is enabled
Oct 29 23:59:46.589910 update_engine[1660]: I20251029 23:59:46.588487 1660 update_check_scheduler.cc:74] Next update check in 4m45s
Oct 29 23:59:46.591881 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware...
Oct 29 23:59:46.592076 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 29 23:59:46.594668 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 29 23:59:46.594684 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 29 23:59:46.595111 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 29 23:59:46.595122 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 29 23:59:46.596586 systemd[1]: Started update-engine.service - Update Engine.
Oct 29 23:59:46.598032 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 29 23:59:46.603864 jq[1685]: true
Oct 29 23:59:46.617840 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware.
Oct 29 23:59:46.624309 extend-filesystems[1648]: Resized partition /dev/sda9
Oct 29 23:59:46.633442 extend-filesystems[1711]: resize2fs 1.47.3 (8-Jul-2025)
Oct 29 23:59:46.638055 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 1635323 blocks
Oct 29 23:59:46.661436 kernel: EXT4-fs (sda9): resized filesystem to 1635323
Oct 29 23:59:46.647694 unknown[1694]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath
Oct 29 23:59:46.655989 unknown[1694]: Core dump limit set to -1
Oct 29 23:59:46.661853 systemd-logind[1659]: Watching system buttons on /dev/input/event2 (Power Button)
Oct 29 23:59:46.661972 systemd-logind[1659]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 29 23:59:46.662128 systemd-logind[1659]: New seat seat0.
Oct 29 23:59:46.662433 extend-filesystems[1711]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Oct 29 23:59:46.662433 extend-filesystems[1711]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 29 23:59:46.662433 extend-filesystems[1711]: The filesystem on /dev/sda9 is now 1635323 (4k) blocks long.
Oct 29 23:59:46.663725 extend-filesystems[1648]: Resized filesystem in /dev/sda9
Oct 29 23:59:46.663243 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 29 23:59:46.664098 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 29 23:59:46.669114 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 29 23:59:46.670747 bash[1721]: Updated "/home/core/.ssh/authorized_keys"
Oct 29 23:59:46.671597 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 29 23:59:46.672537 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 29 23:59:46.819408 locksmithd[1698]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 29 23:59:46.907208 containerd[1684]: time="2025-10-29T23:59:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Oct 29 23:59:46.908062 containerd[1684]: time="2025-10-29T23:59:46.908046398Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Oct 29 23:59:46.928688 sshd_keygen[1688]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 29 23:59:46.929007 containerd[1684]: time="2025-10-29T23:59:46.928014688Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="5.562µs"
Oct 29 23:59:46.929007 containerd[1684]: time="2025-10-29T23:59:46.928859833Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Oct 29 23:59:46.929007 containerd[1684]: time="2025-10-29T23:59:46.928874970Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Oct 29 23:59:46.929007 containerd[1684]: time="2025-10-29T23:59:46.928951425Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Oct 29 23:59:46.929007 containerd[1684]: time="2025-10-29T23:59:46.928961155Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Oct 29 23:59:46.929007 containerd[1684]: time="2025-10-29T23:59:46.928975404Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 29 23:59:46.929183 containerd[1684]: time="2025-10-29T23:59:46.929041246Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 29 23:59:46.929183 containerd[1684]: time="2025-10-29T23:59:46.929049584Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 29 23:59:46.929209 containerd[1684]: time="2025-10-29T23:59:46.929185173Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 29 23:59:46.929209 containerd[1684]: time="2025-10-29T23:59:46.929193059Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 29 23:59:46.929209 containerd[1684]: time="2025-10-29T23:59:46.929199133Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 29 23:59:46.929209 containerd[1684]: time="2025-10-29T23:59:46.929203456Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Oct 29 23:59:46.929261 containerd[1684]: time="2025-10-29T23:59:46.929245768Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Oct 29 23:59:46.929444 containerd[1684]: time="2025-10-29T23:59:46.929353948Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 29 23:59:46.929444 containerd[1684]: time="2025-10-29T23:59:46.929371493Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 29 23:59:46.929444 containerd[1684]: time="2025-10-29T23:59:46.929377341Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Oct 29 23:59:46.929444 containerd[1684]: time="2025-10-29T23:59:46.929395493Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Oct 29 23:59:46.929559 containerd[1684]: time="2025-10-29T23:59:46.929547720Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Oct 29 23:59:46.929692 containerd[1684]: time="2025-10-29T23:59:46.929579926Z" level=info msg="metadata content store policy set" policy=shared
Oct 29 23:59:46.930863 containerd[1684]: time="2025-10-29T23:59:46.930846874Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Oct 29 23:59:46.930893 containerd[1684]: time="2025-10-29T23:59:46.930878136Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Oct 29 23:59:46.930909 containerd[1684]: time="2025-10-29T23:59:46.930892767Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Oct 29 23:59:46.930909 containerd[1684]: time="2025-10-29T23:59:46.930900228Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Oct 29 23:59:46.930909 containerd[1684]: time="2025-10-29T23:59:46.930907418Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Oct 29 23:59:46.930955 containerd[1684]: time="2025-10-29T23:59:46.930913442Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Oct 29 23:59:46.931095 containerd[1684]: time="2025-10-29T23:59:46.931079748Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Oct 29 23:59:46.931095 containerd[1684]: time="2025-10-29T23:59:46.931093369Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Oct 29 23:59:46.931131 containerd[1684]: time="2025-10-29T23:59:46.931099781Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Oct 29 23:59:46.931131 containerd[1684]: time="2025-10-29T23:59:46.931106133Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Oct 29 23:59:46.931131 containerd[1684]: time="2025-10-29T23:59:46.931111556Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Oct 29 23:59:46.931131 containerd[1684]: time="2025-10-29T23:59:46.931118572Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Oct 29 23:59:46.931196 containerd[1684]: time="2025-10-29T23:59:46.931184273Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Oct 29 23:59:46.931212 containerd[1684]: time="2025-10-29T23:59:46.931201945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Oct 29 23:59:46.931226 containerd[1684]: time="2025-10-29T23:59:46.931216736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Oct 29 23:59:46.931239 containerd[1684]: time="2025-10-29T23:59:46.931228431Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Oct 29 23:59:46.931239 containerd[1684]: time="2025-10-29T23:59:46.931235071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Oct 29 23:59:46.931267 containerd[1684]: time="2025-10-29T23:59:46.931241135Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Oct 29 23:59:46.931267 containerd[1684]: time="2025-10-29T23:59:46.931247153Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Oct 29 23:59:46.931267 containerd[1684]: time="2025-10-29T23:59:46.931252517Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Oct 29 23:59:46.931267 containerd[1684]: time="2025-10-29T23:59:46.931258581Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Oct 29 23:59:46.931267 containerd[1684]: time="2025-10-29T23:59:46.931264183Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Oct 29 23:59:46.931332 containerd[1684]: time="2025-10-29T23:59:46.931269308Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Oct 29 23:59:46.931332 containerd[1684]: time="2025-10-29T23:59:46.931306089Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Oct 29 23:59:46.931332 containerd[1684]: time="2025-10-29T23:59:46.931316523Z" level=info msg="Start snapshots syncer"
Oct 29 23:59:46.933045 containerd[1684]: time="2025-10-29T23:59:46.932826949Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Oct 29 23:59:46.933080 containerd[1684]: time="2025-10-29T23:59:46.933018001Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Oct 29 23:59:46.933080 containerd[1684]: time="2025-10-29T23:59:46.933075384Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Oct 29 23:59:46.933160 containerd[1684]: time="2025-10-29T23:59:46.933128969Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 29 23:59:46.933227 containerd[1684]: time="2025-10-29T23:59:46.933215602Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 29 23:59:46.933246 containerd[1684]: time="2025-10-29T23:59:46.933230577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 29 23:59:46.933246 containerd[1684]: time="2025-10-29T23:59:46.933237095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 29 23:59:46.933276 containerd[1684]: time="2025-10-29T23:59:46.933244686Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 29 23:59:46.933276 containerd[1684]: time="2025-10-29T23:59:46.933255161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 29 23:59:46.933276 containerd[1684]: time="2025-10-29T23:59:46.933261774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 29 23:59:46.933276 containerd[1684]: time="2025-10-29T23:59:46.933267823Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 29 23:59:46.933325 containerd[1684]: time="2025-10-29T23:59:46.933281067Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 29 23:59:46.933325 containerd[1684]: time="2025-10-29T23:59:46.933287211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 29 23:59:46.933325 containerd[1684]: time="2025-10-29T23:59:46.933295440Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 29 23:59:46.933364 containerd[1684]: time="2025-10-29T23:59:46.933323813Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 29 23:59:46.933364 containerd[1684]: time="2025-10-29T23:59:46.933344222Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 29 23:59:46.933364 containerd[1684]: time="2025-10-29T23:59:46.933350132Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 29 23:59:46.933364 containerd[1684]: time="2025-10-29T23:59:46.933355391Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 29 23:59:46.933364 containerd[1684]: time="2025-10-29T23:59:46.933359808Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 29 23:59:46.933364 containerd[1684]: time="2025-10-29T23:59:46.933364684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 29 23:59:46.933441 containerd[1684]: time="2025-10-29T23:59:46.933395292Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 29 23:59:46.933441 containerd[1684]: time="2025-10-29T23:59:46.933406358Z" level=info msg="runtime interface created" Oct 29 23:59:46.933441 containerd[1684]: time="2025-10-29T23:59:46.933409532Z" level=info msg="created NRI interface" Oct 29 23:59:46.933441 containerd[1684]: time="2025-10-29T23:59:46.933414412Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 29 23:59:46.933441 containerd[1684]: time="2025-10-29T23:59:46.933420115Z" level=info msg="Connect containerd service" Oct 29 23:59:46.933441 containerd[1684]: time="2025-10-29T23:59:46.933434973Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 29 23:59:46.934254 
containerd[1684]: time="2025-10-29T23:59:46.933894042Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 23:59:46.970277 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 29 23:59:46.973152 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 29 23:59:46.987799 systemd[1]: issuegen.service: Deactivated successfully. Oct 29 23:59:46.987938 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 29 23:59:47.004083 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 29 23:59:47.011884 tar[1666]: linux-amd64/README.md Oct 29 23:59:47.020424 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 29 23:59:47.022177 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 29 23:59:47.024163 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 29 23:59:47.024336 systemd[1]: Reached target getty.target - Login Prompts. Oct 29 23:59:47.034158 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 29 23:59:47.054149 containerd[1684]: time="2025-10-29T23:59:47.054126314Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 29 23:59:47.054206 containerd[1684]: time="2025-10-29T23:59:47.054166361Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 29 23:59:47.054206 containerd[1684]: time="2025-10-29T23:59:47.054180972Z" level=info msg="Start subscribing containerd event" Oct 29 23:59:47.054250 containerd[1684]: time="2025-10-29T23:59:47.054198965Z" level=info msg="Start recovering state" Oct 29 23:59:47.054250 containerd[1684]: time="2025-10-29T23:59:47.054247530Z" level=info msg="Start event monitor" Oct 29 23:59:47.054275 containerd[1684]: time="2025-10-29T23:59:47.054254419Z" level=info msg="Start cni network conf syncer for default" Oct 29 23:59:47.054275 containerd[1684]: time="2025-10-29T23:59:47.054264938Z" level=info msg="Start streaming server" Oct 29 23:59:47.054275 containerd[1684]: time="2025-10-29T23:59:47.054272626Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 29 23:59:47.054311 containerd[1684]: time="2025-10-29T23:59:47.054276685Z" level=info msg="runtime interface starting up..." Oct 29 23:59:47.054311 containerd[1684]: time="2025-10-29T23:59:47.054279665Z" level=info msg="starting plugins..." Oct 29 23:59:47.054311 containerd[1684]: time="2025-10-29T23:59:47.054286682Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 29 23:59:47.054523 containerd[1684]: time="2025-10-29T23:59:47.054344216Z" level=info msg="containerd successfully booted in 0.147573s" Oct 29 23:59:47.054399 systemd[1]: Started containerd.service - containerd container runtime. Oct 29 23:59:47.878215 systemd-networkd[1584]: ens192: Gained IPv6LL Oct 29 23:59:47.878594 systemd-timesyncd[1542]: Network configuration changed, trying to establish connection. Oct 29 23:59:47.880312 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 29 23:59:47.880911 systemd[1]: Reached target network-online.target - Network is Online. Oct 29 23:59:47.882852 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... 
Oct 29 23:59:47.885119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:59:47.893143 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 29 23:59:47.912485 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 29 23:59:47.923368 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 29 23:59:47.923523 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Oct 29 23:59:47.923827 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 29 23:59:48.771082 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:59:48.771485 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 29 23:59:48.772108 systemd[1]: Startup finished in 2.261s (kernel) + 5.271s (initrd) + 4.173s (userspace) = 11.707s. Oct 29 23:59:48.781310 (kubelet)[1852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 23:59:49.182888 login[1814]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 29 23:59:49.183787 login[1815]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 29 23:59:49.188345 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 29 23:59:49.188962 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 29 23:59:49.195820 systemd-logind[1659]: New session 1 of user core. Oct 29 23:59:49.198492 systemd-logind[1659]: New session 2 of user core. Oct 29 23:59:49.208037 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 29 23:59:49.210963 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Oct 29 23:59:49.220300 (systemd)[1863]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 29 23:59:49.221699 systemd-logind[1659]: New session c1 of user core. Oct 29 23:59:49.284807 kubelet[1852]: E1029 23:59:49.284785 1852 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 23:59:49.286144 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 23:59:49.286234 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 23:59:49.287087 systemd[1]: kubelet.service: Consumed 607ms CPU time, 255.9M memory peak. Oct 29 23:59:49.308069 systemd[1863]: Queued start job for default target default.target. Oct 29 23:59:49.316024 systemd[1863]: Created slice app.slice - User Application Slice. Oct 29 23:59:49.316051 systemd[1863]: Reached target paths.target - Paths. Oct 29 23:59:49.316074 systemd[1863]: Reached target timers.target - Timers. Oct 29 23:59:49.316733 systemd[1863]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 29 23:59:49.322904 systemd[1863]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 29 23:59:49.322951 systemd[1863]: Reached target sockets.target - Sockets. Oct 29 23:59:49.322989 systemd[1863]: Reached target basic.target - Basic System. Oct 29 23:59:49.323012 systemd[1863]: Reached target default.target - Main User Target. Oct 29 23:59:49.323039 systemd[1863]: Startup finished in 97ms. Oct 29 23:59:49.323111 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 29 23:59:49.323970 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 29 23:59:49.324670 systemd[1]: Started session-2.scope - Session 2 of User core. 
Oct 29 23:59:49.732146 systemd-timesyncd[1542]: Network configuration changed, trying to establish connection. Oct 29 23:59:59.536945 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 29 23:59:59.538665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:00:00.116876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:00:00.120019 (kubelet)[1904]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:00:00.154672 kubelet[1904]: E1030 00:00:00.154636 1904 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:00:00.157164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:00:00.157326 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 00:00:00.157753 systemd[1]: kubelet.service: Consumed 106ms CPU time, 109.7M memory peak. Oct 30 00:00:10.407559 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 30 00:00:10.408873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:00:10.750954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 30 00:00:10.758181 (kubelet)[1918]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:00:10.791082 kubelet[1918]: E1030 00:00:10.791050 1918 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:00:10.792504 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:00:10.792640 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 00:00:10.792999 systemd[1]: kubelet.service: Consumed 89ms CPU time, 109.8M memory peak. Oct 30 00:00:16.731298 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 30 00:00:16.732547 systemd[1]: Started sshd@0-139.178.70.100:22-139.178.89.65:55866.service - OpenSSH per-connection server daemon (139.178.89.65:55866). Oct 30 00:00:16.818207 sshd[1926]: Accepted publickey for core from 139.178.89.65 port 55866 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:00:16.819068 sshd-session[1926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:00:16.821912 systemd-logind[1659]: New session 3 of user core. Oct 30 00:00:16.829303 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 30 00:00:16.883294 systemd[1]: Started sshd@1-139.178.70.100:22-139.178.89.65:55868.service - OpenSSH per-connection server daemon (139.178.89.65:55868). 
Oct 30 00:00:16.929321 sshd[1932]: Accepted publickey for core from 139.178.89.65 port 55868 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:00:16.930159 sshd-session[1932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:00:16.933246 systemd-logind[1659]: New session 4 of user core. Oct 30 00:00:16.938156 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 30 00:00:16.987130 sshd[1935]: Connection closed by 139.178.89.65 port 55868 Oct 30 00:00:16.987483 sshd-session[1932]: pam_unix(sshd:session): session closed for user core Oct 30 00:00:16.996370 systemd[1]: sshd@1-139.178.70.100:22-139.178.89.65:55868.service: Deactivated successfully. Oct 30 00:00:16.997434 systemd[1]: session-4.scope: Deactivated successfully. Oct 30 00:00:16.997929 systemd-logind[1659]: Session 4 logged out. Waiting for processes to exit. Oct 30 00:00:16.999408 systemd[1]: Started sshd@2-139.178.70.100:22-139.178.89.65:55876.service - OpenSSH per-connection server daemon (139.178.89.65:55876). Oct 30 00:00:17.000121 systemd-logind[1659]: Removed session 4. Oct 30 00:00:17.041312 sshd[1941]: Accepted publickey for core from 139.178.89.65 port 55876 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:00:17.042215 sshd-session[1941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:00:17.045643 systemd-logind[1659]: New session 5 of user core. Oct 30 00:00:17.055204 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 30 00:00:17.102175 sshd[1944]: Connection closed by 139.178.89.65 port 55876 Oct 30 00:00:17.102466 sshd-session[1941]: pam_unix(sshd:session): session closed for user core Oct 30 00:00:17.108094 systemd[1]: sshd@2-139.178.70.100:22-139.178.89.65:55876.service: Deactivated successfully. Oct 30 00:00:17.108993 systemd[1]: session-5.scope: Deactivated successfully. Oct 30 00:00:17.109702 systemd-logind[1659]: Session 5 logged out. 
Waiting for processes to exit. Oct 30 00:00:17.110702 systemd[1]: Started sshd@3-139.178.70.100:22-139.178.89.65:55884.service - OpenSSH per-connection server daemon (139.178.89.65:55884). Oct 30 00:00:17.113245 systemd-logind[1659]: Removed session 5. Oct 30 00:00:17.145271 sshd[1950]: Accepted publickey for core from 139.178.89.65 port 55884 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:00:17.146366 sshd-session[1950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:00:17.151311 systemd-logind[1659]: New session 6 of user core. Oct 30 00:00:17.156252 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 30 00:00:17.205439 sshd[1953]: Connection closed by 139.178.89.65 port 55884 Oct 30 00:00:17.205799 sshd-session[1950]: pam_unix(sshd:session): session closed for user core Oct 30 00:00:17.217001 systemd[1]: sshd@3-139.178.70.100:22-139.178.89.65:55884.service: Deactivated successfully. Oct 30 00:00:17.218012 systemd[1]: session-6.scope: Deactivated successfully. Oct 30 00:00:17.218817 systemd-logind[1659]: Session 6 logged out. Waiting for processes to exit. Oct 30 00:00:17.219924 systemd-logind[1659]: Removed session 6. Oct 30 00:00:17.221392 systemd[1]: Started sshd@4-139.178.70.100:22-139.178.89.65:55886.service - OpenSSH per-connection server daemon (139.178.89.65:55886). Oct 30 00:00:17.256287 sshd[1959]: Accepted publickey for core from 139.178.89.65 port 55886 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:00:17.256682 sshd-session[1959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:00:17.259772 systemd-logind[1659]: New session 7 of user core. Oct 30 00:00:17.266190 systemd[1]: Started session-7.scope - Session 7 of User core. 
Oct 30 00:00:17.332149 sudo[1963]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 30 00:00:17.332377 sudo[1963]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 00:00:17.345835 sudo[1963]: pam_unix(sudo:session): session closed for user root Oct 30 00:00:17.347117 sshd[1962]: Connection closed by 139.178.89.65 port 55886 Oct 30 00:00:17.347663 sshd-session[1959]: pam_unix(sshd:session): session closed for user core Oct 30 00:00:17.360610 systemd[1]: sshd@4-139.178.70.100:22-139.178.89.65:55886.service: Deactivated successfully. Oct 30 00:00:17.361996 systemd[1]: session-7.scope: Deactivated successfully. Oct 30 00:00:17.362749 systemd-logind[1659]: Session 7 logged out. Waiting for processes to exit. Oct 30 00:00:17.364915 systemd[1]: Started sshd@5-139.178.70.100:22-139.178.89.65:55892.service - OpenSSH per-connection server daemon (139.178.89.65:55892). Oct 30 00:00:17.365894 systemd-logind[1659]: Removed session 7. Oct 30 00:00:17.416819 sshd[1969]: Accepted publickey for core from 139.178.89.65 port 55892 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:00:17.417607 sshd-session[1969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:00:17.420712 systemd-logind[1659]: New session 8 of user core. Oct 30 00:00:17.429152 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 30 00:00:17.477860 sudo[1974]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 30 00:00:17.478009 sudo[1974]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 00:00:17.485594 sudo[1974]: pam_unix(sudo:session): session closed for user root Oct 30 00:00:17.489119 sudo[1973]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 30 00:00:17.489257 sudo[1973]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 00:00:17.495244 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 30 00:00:17.524134 augenrules[1996]: No rules Oct 30 00:00:17.524682 systemd[1]: audit-rules.service: Deactivated successfully. Oct 30 00:00:17.524863 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 30 00:00:17.526406 sudo[1973]: pam_unix(sudo:session): session closed for user root Oct 30 00:00:17.528084 sshd[1972]: Connection closed by 139.178.89.65 port 55892 Oct 30 00:00:17.528333 sshd-session[1969]: pam_unix(sshd:session): session closed for user core Oct 30 00:00:17.534994 systemd[1]: sshd@5-139.178.70.100:22-139.178.89.65:55892.service: Deactivated successfully. Oct 30 00:00:17.537564 systemd[1]: session-8.scope: Deactivated successfully. Oct 30 00:00:17.538712 systemd-logind[1659]: Session 8 logged out. Waiting for processes to exit. Oct 30 00:00:17.539952 systemd[1]: Started sshd@6-139.178.70.100:22-139.178.89.65:55894.service - OpenSSH per-connection server daemon (139.178.89.65:55894). Oct 30 00:00:17.543489 systemd-logind[1659]: Removed session 8. 
Oct 30 00:00:17.586065 sshd[2005]: Accepted publickey for core from 139.178.89.65 port 55894 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:00:17.586303 sshd-session[2005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:00:17.589996 systemd-logind[1659]: New session 9 of user core. Oct 30 00:00:17.595110 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 30 00:00:17.644854 sudo[2009]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 30 00:00:17.645241 sudo[2009]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 00:00:18.340272 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 30 00:00:18.350299 (dockerd)[2028]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 30 00:00:18.680896 dockerd[2028]: time="2025-10-30T00:00:18.680707260Z" level=info msg="Starting up" Oct 30 00:00:18.681179 dockerd[2028]: time="2025-10-30T00:00:18.681101039Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 30 00:00:18.688336 dockerd[2028]: time="2025-10-30T00:00:18.688306630Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 30 00:00:18.731486 dockerd[2028]: time="2025-10-30T00:00:18.731309926Z" level=info msg="Loading containers: start." Oct 30 00:00:18.743045 kernel: Initializing XFRM netlink socket Oct 30 00:00:19.010085 systemd-timesyncd[1542]: Network configuration changed, trying to establish connection. Oct 30 00:00:19.067000 systemd-networkd[1584]: docker0: Link UP Oct 30 00:01:57.212988 systemd-resolved[1337]: Clock change detected. Flushing caches. Oct 30 00:01:57.213276 systemd-timesyncd[1542]: Contacted time server 141.11.89.193:123 (2.flatcar.pool.ntp.org). 
Oct 30 00:01:57.213315 systemd-timesyncd[1542]: Initial clock synchronization to Thu 2025-10-30 00:01:57.212904 UTC. Oct 30 00:01:57.216218 dockerd[2028]: time="2025-10-30T00:01:57.216193965Z" level=info msg="Loading containers: done." Oct 30 00:01:57.224343 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2019433412-merged.mount: Deactivated successfully. Oct 30 00:01:57.231062 dockerd[2028]: time="2025-10-30T00:01:57.231026516Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 30 00:01:57.231175 dockerd[2028]: time="2025-10-30T00:01:57.231095553Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 30 00:01:57.231175 dockerd[2028]: time="2025-10-30T00:01:57.231164938Z" level=info msg="Initializing buildkit" Oct 30 00:01:57.244401 dockerd[2028]: time="2025-10-30T00:01:57.244329610Z" level=info msg="Completed buildkit initialization" Oct 30 00:01:57.250550 dockerd[2028]: time="2025-10-30T00:01:57.250515391Z" level=info msg="Daemon has completed initialization" Oct 30 00:01:57.251202 dockerd[2028]: time="2025-10-30T00:01:57.250602000Z" level=info msg="API listen on /run/docker.sock" Oct 30 00:01:57.251385 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 30 00:01:58.565999 containerd[1684]: time="2025-10-30T00:01:58.565974157Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 30 00:01:59.169125 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 30 00:01:59.170297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:01:59.401530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 30 00:01:59.407417 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:01:59.429326 kubelet[2248]: E1030 00:01:59.429258 2248 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:01:59.430642 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:01:59.430778 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 00:01:59.431175 systemd[1]: kubelet.service: Consumed 105ms CPU time, 110.2M memory peak. Oct 30 00:01:59.462015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2567022218.mount: Deactivated successfully. Oct 30 00:02:00.703050 containerd[1684]: time="2025-10-30T00:02:00.703006721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:00.704127 containerd[1684]: time="2025-10-30T00:02:00.703660242Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Oct 30 00:02:00.704127 containerd[1684]: time="2025-10-30T00:02:00.703816516Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:00.705782 containerd[1684]: time="2025-10-30T00:02:00.705763981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:00.706978 containerd[1684]: time="2025-10-30T00:02:00.706602357Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.140604128s" Oct 30 00:02:00.707053 containerd[1684]: time="2025-10-30T00:02:00.707039813Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Oct 30 00:02:00.707617 containerd[1684]: time="2025-10-30T00:02:00.707588447Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 30 00:02:01.982562 containerd[1684]: time="2025-10-30T00:02:01.982297941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:01.983365 containerd[1684]: time="2025-10-30T00:02:01.983334240Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Oct 30 00:02:01.985754 containerd[1684]: time="2025-10-30T00:02:01.985720124Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:01.988123 containerd[1684]: time="2025-10-30T00:02:01.987869348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:01.991051 containerd[1684]: time="2025-10-30T00:02:01.990889985Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id 
\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.28319048s" Oct 30 00:02:01.991051 containerd[1684]: time="2025-10-30T00:02:01.990911785Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Oct 30 00:02:01.991366 containerd[1684]: time="2025-10-30T00:02:01.991340499Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 30 00:02:03.530988 containerd[1684]: time="2025-10-30T00:02:03.530956149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:03.531574 containerd[1684]: time="2025-10-30T00:02:03.531554824Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Oct 30 00:02:03.532073 containerd[1684]: time="2025-10-30T00:02:03.531847692Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:03.533537 containerd[1684]: time="2025-10-30T00:02:03.533523069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:03.534332 containerd[1684]: time="2025-10-30T00:02:03.534316419Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.542898161s" Oct 30 00:02:03.534364 containerd[1684]: time="2025-10-30T00:02:03.534334665Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Oct 30 00:02:03.534629 containerd[1684]: time="2025-10-30T00:02:03.534611106Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 30 00:02:04.767384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1605690305.mount: Deactivated successfully. Oct 30 00:02:05.041849 containerd[1684]: time="2025-10-30T00:02:05.041240016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:05.050798 containerd[1684]: time="2025-10-30T00:02:05.050765531Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Oct 30 00:02:05.061438 containerd[1684]: time="2025-10-30T00:02:05.061403645Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:05.070962 containerd[1684]: time="2025-10-30T00:02:05.070936345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:05.071521 containerd[1684]: time="2025-10-30T00:02:05.071355747Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.536539729s" Oct 30 00:02:05.071644 containerd[1684]: time="2025-10-30T00:02:05.071569762Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Oct 30 00:02:05.071925 containerd[1684]: time="2025-10-30T00:02:05.071843587Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 30 00:02:05.761697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount695863251.mount: Deactivated successfully. Oct 30 00:02:07.180124 containerd[1684]: time="2025-10-30T00:02:07.180080100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:07.180655 containerd[1684]: time="2025-10-30T00:02:07.180625455Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Oct 30 00:02:07.181083 containerd[1684]: time="2025-10-30T00:02:07.181064327Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:07.183035 containerd[1684]: time="2025-10-30T00:02:07.183017791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:07.183821 containerd[1684]: time="2025-10-30T00:02:07.183798874Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.111934633s" Oct 30 00:02:07.183862 containerd[1684]: time="2025-10-30T00:02:07.183825017Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Oct 30 00:02:07.184268 containerd[1684]: time="2025-10-30T00:02:07.184241108Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 30 00:02:07.789359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3057701500.mount: Deactivated successfully. Oct 30 00:02:07.895440 containerd[1684]: time="2025-10-30T00:02:07.895399812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:07.900073 containerd[1684]: time="2025-10-30T00:02:07.900020674Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Oct 30 00:02:07.903396 containerd[1684]: time="2025-10-30T00:02:07.903325546Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:07.905083 containerd[1684]: time="2025-10-30T00:02:07.905060088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:07.906147 containerd[1684]: time="2025-10-30T00:02:07.906099814Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 
721.838638ms" Oct 30 00:02:07.906191 containerd[1684]: time="2025-10-30T00:02:07.906149795Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Oct 30 00:02:07.906668 containerd[1684]: time="2025-10-30T00:02:07.906649068Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 30 00:02:09.434636 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 30 00:02:09.436924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:02:09.592773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:02:09.603307 (kubelet)[2432]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:02:09.790848 kubelet[2432]: E1030 00:02:09.790787 2432 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:02:09.792909 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:02:09.792991 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 00:02:09.793349 systemd[1]: kubelet.service: Consumed 100ms CPU time, 104.6M memory peak. Oct 30 00:02:10.191251 update_engine[1660]: I20251030 00:02:10.191221 1660 update_attempter.cc:509] Updating boot flags... 
Oct 30 00:02:10.861805 containerd[1684]: time="2025-10-30T00:02:10.861766743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:10.862455 containerd[1684]: time="2025-10-30T00:02:10.862443769Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Oct 30 00:02:10.863098 containerd[1684]: time="2025-10-30T00:02:10.863086124Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:10.866175 containerd[1684]: time="2025-10-30T00:02:10.866162751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:10.866663 containerd[1684]: time="2025-10-30T00:02:10.866564500Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.959895022s" Oct 30 00:02:10.866743 containerd[1684]: time="2025-10-30T00:02:10.866733374Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Oct 30 00:02:12.990595 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:02:12.990697 systemd[1]: kubelet.service: Consumed 100ms CPU time, 104.6M memory peak. Oct 30 00:02:12.992200 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:02:13.015206 systemd[1]: Reload requested from client PID 2492 ('systemctl') (unit session-9.scope)... 
Oct 30 00:02:13.015217 systemd[1]: Reloading... Oct 30 00:02:13.079130 zram_generator::config[2536]: No configuration found. Oct 30 00:02:13.152056 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 30 00:02:13.218871 systemd[1]: Reloading finished in 203 ms. Oct 30 00:02:13.256591 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 30 00:02:13.256632 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 30 00:02:13.256779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:02:13.258313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:02:13.565974 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:02:13.577478 (kubelet)[2604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 00:02:13.615715 kubelet[2604]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 00:02:13.615715 kubelet[2604]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 30 00:02:13.630620 kubelet[2604]: I1030 00:02:13.630530 2604 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 00:02:14.192993 kubelet[2604]: I1030 00:02:14.192900 2604 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 30 00:02:14.192993 kubelet[2604]: I1030 00:02:14.192917 2604 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 00:02:14.195110 kubelet[2604]: I1030 00:02:14.194207 2604 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 30 00:02:14.195110 kubelet[2604]: I1030 00:02:14.194216 2604 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 30 00:02:14.195110 kubelet[2604]: I1030 00:02:14.194344 2604 server.go:956] "Client rotation is on, will bootstrap in background" Oct 30 00:02:14.205405 kubelet[2604]: I1030 00:02:14.205393 2604 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 00:02:14.206107 kubelet[2604]: E1030 00:02:14.206087 2604 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 30 00:02:14.215533 kubelet[2604]: I1030 00:02:14.215511 2604 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 30 00:02:14.223190 kubelet[2604]: I1030 00:02:14.223178 2604 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 30 00:02:14.223894 kubelet[2604]: I1030 00:02:14.223877 2604 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 00:02:14.225023 kubelet[2604]: I1030 00:02:14.223894 2604 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 00:02:14.225023 kubelet[2604]: I1030 00:02:14.225022 2604 topology_manager.go:138] "Creating topology manager with none policy" Oct 30 00:02:14.225124 
kubelet[2604]: I1030 00:02:14.225030 2604 container_manager_linux.go:306] "Creating device plugin manager" Oct 30 00:02:14.225124 kubelet[2604]: I1030 00:02:14.225082 2604 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 30 00:02:14.225841 kubelet[2604]: I1030 00:02:14.225831 2604 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:02:14.225984 kubelet[2604]: I1030 00:02:14.225941 2604 kubelet.go:475] "Attempting to sync node with API server" Oct 30 00:02:14.225984 kubelet[2604]: I1030 00:02:14.225950 2604 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 00:02:14.225984 kubelet[2604]: I1030 00:02:14.225964 2604 kubelet.go:387] "Adding apiserver pod source" Oct 30 00:02:14.226930 kubelet[2604]: I1030 00:02:14.226629 2604 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 00:02:14.229402 kubelet[2604]: E1030 00:02:14.229251 2604 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 30 00:02:14.229402 kubelet[2604]: I1030 00:02:14.229303 2604 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 00:02:14.233603 kubelet[2604]: I1030 00:02:14.233591 2604 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 30 00:02:14.233666 kubelet[2604]: I1030 00:02:14.233660 2604 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 30 00:02:14.235734 kubelet[2604]: W1030 00:02:14.235725 2604 
probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 30 00:02:14.238098 kubelet[2604]: E1030 00:02:14.238069 2604 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 30 00:02:14.240605 kubelet[2604]: I1030 00:02:14.240548 2604 server.go:1262] "Started kubelet" Oct 30 00:02:14.241139 kubelet[2604]: I1030 00:02:14.241132 2604 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 00:02:14.241530 kubelet[2604]: I1030 00:02:14.241517 2604 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 00:02:14.246119 kubelet[2604]: E1030 00:02:14.243949 2604 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.100:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.100:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18731be16ce4f01a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-30 00:02:14.240530458 +0000 UTC m=+0.660655369,LastTimestamp:2025-10-30 00:02:14.240530458 +0000 UTC m=+0.660655369,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 30 00:02:14.280755 kubelet[2604]: I1030 00:02:14.280445 2604 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 30 00:02:14.280755 kubelet[2604]: E1030 00:02:14.280624 2604 kubelet_node_status.go:404] "Error 
getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:02:14.281815 kubelet[2604]: I1030 00:02:14.281797 2604 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 00:02:14.303068 kubelet[2604]: I1030 00:02:14.302306 2604 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 30 00:02:14.303068 kubelet[2604]: I1030 00:02:14.302382 2604 reconciler.go:29] "Reconciler: start to sync state" Oct 30 00:02:14.303068 kubelet[2604]: I1030 00:02:14.302697 2604 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 00:02:14.303068 kubelet[2604]: I1030 00:02:14.302769 2604 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 30 00:02:14.303068 kubelet[2604]: I1030 00:02:14.302926 2604 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 00:02:14.308225 kubelet[2604]: I1030 00:02:14.305985 2604 server.go:310] "Adding debug handlers to kubelet server" Oct 30 00:02:14.321694 kubelet[2604]: E1030 00:02:14.321671 2604 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 30 00:02:14.323901 kubelet[2604]: E1030 00:02:14.322424 2604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="200ms" Oct 30 00:02:14.323901 kubelet[2604]: I1030 00:02:14.323295 2604 factory.go:223] Registration of the systemd container 
factory successfully Oct 30 00:02:14.323901 kubelet[2604]: I1030 00:02:14.323354 2604 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 00:02:14.324303 kubelet[2604]: I1030 00:02:14.324289 2604 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 30 00:02:14.325053 kubelet[2604]: I1030 00:02:14.325038 2604 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 30 00:02:14.325053 kubelet[2604]: I1030 00:02:14.325052 2604 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 30 00:02:14.325123 kubelet[2604]: I1030 00:02:14.325066 2604 kubelet.go:2427] "Starting kubelet main sync loop" Oct 30 00:02:14.325123 kubelet[2604]: E1030 00:02:14.325088 2604 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 00:02:14.325228 kubelet[2604]: I1030 00:02:14.325215 2604 factory.go:223] Registration of the containerd container factory successfully Oct 30 00:02:14.330633 kubelet[2604]: E1030 00:02:14.330488 2604 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 00:02:14.345266 kubelet[2604]: E1030 00:02:14.335540 2604 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 30 00:02:14.371164 kubelet[2604]: I1030 00:02:14.371147 2604 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 00:02:14.371164 kubelet[2604]: I1030 00:02:14.371158 2604 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 00:02:14.371164 kubelet[2604]: I1030 00:02:14.371173 2604 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:02:14.381464 kubelet[2604]: E1030 00:02:14.381451 2604 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:02:14.385708 kubelet[2604]: I1030 00:02:14.385687 2604 policy_none.go:49] "None policy: Start" Oct 30 00:02:14.385738 kubelet[2604]: I1030 00:02:14.385713 2604 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 30 00:02:14.385738 kubelet[2604]: I1030 00:02:14.385720 2604 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 30 00:02:14.417565 kubelet[2604]: I1030 00:02:14.417549 2604 policy_none.go:47] "Start" Oct 30 00:02:14.421205 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 30 00:02:14.426059 kubelet[2604]: E1030 00:02:14.426048 2604 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 30 00:02:14.434637 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Oct 30 00:02:14.437269 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 30 00:02:14.454591 kubelet[2604]: E1030 00:02:14.454548 2604 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 30 00:02:14.455927 kubelet[2604]: I1030 00:02:14.455114 2604 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 00:02:14.455927 kubelet[2604]: I1030 00:02:14.455126 2604 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 00:02:14.455927 kubelet[2604]: I1030 00:02:14.455653 2604 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 00:02:14.455927 kubelet[2604]: E1030 00:02:14.455716 2604 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 30 00:02:14.455927 kubelet[2604]: E1030 00:02:14.455738 2604 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 30 00:02:14.523116 kubelet[2604]: E1030 00:02:14.523081 2604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="400ms" Oct 30 00:02:14.572216 kubelet[2604]: I1030 00:02:14.556824 2604 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 00:02:14.572216 kubelet[2604]: E1030 00:02:14.556988 2604 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Oct 30 00:02:14.635238 systemd[1]: Created slice 
kubepods-burstable-pod46ccd9020aa0213fef67fa6a9333de16.slice - libcontainer container kubepods-burstable-pod46ccd9020aa0213fef67fa6a9333de16.slice. Oct 30 00:02:14.660058 kubelet[2604]: E1030 00:02:14.659963 2604 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:02:14.661876 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Oct 30 00:02:14.663391 kubelet[2604]: E1030 00:02:14.663180 2604 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:02:14.670721 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Oct 30 00:02:14.671942 kubelet[2604]: E1030 00:02:14.671934 2604 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:02:14.703422 kubelet[2604]: I1030 00:02:14.703310 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46ccd9020aa0213fef67fa6a9333de16-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"46ccd9020aa0213fef67fa6a9333de16\") " pod="kube-system/kube-apiserver-localhost" Oct 30 00:02:14.703422 kubelet[2604]: I1030 00:02:14.703336 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46ccd9020aa0213fef67fa6a9333de16-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"46ccd9020aa0213fef67fa6a9333de16\") " pod="kube-system/kube-apiserver-localhost" Oct 30 
00:02:14.703422 kubelet[2604]: I1030 00:02:14.703348 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:02:14.703422 kubelet[2604]: I1030 00:02:14.703357 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:02:14.703422 kubelet[2604]: I1030 00:02:14.703366 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 30 00:02:14.703570 kubelet[2604]: I1030 00:02:14.703374 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46ccd9020aa0213fef67fa6a9333de16-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"46ccd9020aa0213fef67fa6a9333de16\") " pod="kube-system/kube-apiserver-localhost" Oct 30 00:02:14.703570 kubelet[2604]: I1030 00:02:14.703392 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 
00:02:14.703570 kubelet[2604]: I1030 00:02:14.703424 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:02:14.703570 kubelet[2604]: I1030 00:02:14.703434 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:02:14.758641 kubelet[2604]: I1030 00:02:14.758562 2604 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 00:02:14.759386 kubelet[2604]: E1030 00:02:14.759306 2604 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Oct 30 00:02:14.924234 kubelet[2604]: E1030 00:02:14.924206 2604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="800ms" Oct 30 00:02:14.962890 containerd[1684]: time="2025-10-30T00:02:14.962824156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:46ccd9020aa0213fef67fa6a9333de16,Namespace:kube-system,Attempt:0,}" Oct 30 00:02:14.970734 containerd[1684]: time="2025-10-30T00:02:14.970627889Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 30 00:02:14.973255 containerd[1684]: time="2025-10-30T00:02:14.973113916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 30 00:02:15.051297 kubelet[2604]: E1030 00:02:15.051222 2604 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 30 00:02:15.160441 kubelet[2604]: I1030 00:02:15.160272 2604 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 00:02:15.160441 kubelet[2604]: E1030 00:02:15.160408 2604 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Oct 30 00:02:15.280676 kubelet[2604]: E1030 00:02:15.280658 2604 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 30 00:02:15.404410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4045897004.mount: Deactivated successfully. 
Oct 30 00:02:15.407117 containerd[1684]: time="2025-10-30T00:02:15.406795767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:02:15.407769 containerd[1684]: time="2025-10-30T00:02:15.407713436Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:02:15.407826 containerd[1684]: time="2025-10-30T00:02:15.407815995Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 30 00:02:15.408544 containerd[1684]: time="2025-10-30T00:02:15.408517754Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:02:15.408803 containerd[1684]: time="2025-10-30T00:02:15.408791493Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 30 00:02:15.409646 containerd[1684]: time="2025-10-30T00:02:15.409633200Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:02:15.410054 containerd[1684]: time="2025-10-30T00:02:15.410042635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:02:15.410146 containerd[1684]: time="2025-10-30T00:02:15.410132418Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 30 
00:02:15.410684 containerd[1684]: time="2025-10-30T00:02:15.410671990Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 434.497821ms" Oct 30 00:02:15.411519 containerd[1684]: time="2025-10-30T00:02:15.411410597Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 435.703832ms" Oct 30 00:02:15.413947 containerd[1684]: time="2025-10-30T00:02:15.413934098Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 437.983501ms" Oct 30 00:02:15.487890 containerd[1684]: time="2025-10-30T00:02:15.487859421Z" level=info msg="connecting to shim 2bee32f0fea3188dd25b97bd17a4627f01a9a591ae8888653c4d58ee2b436b78" address="unix:///run/containerd/s/6242b2e7f6564fe8d1b095c1cbfecdee23745a612f499160aef0d0455807c99c" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:02:15.493127 containerd[1684]: time="2025-10-30T00:02:15.493086402Z" level=info msg="connecting to shim f9815cf01ee3d852baa202d2c31366a20728778c23a0cb7730e07c512bac23a1" address="unix:///run/containerd/s/4a8d70ae9c2b72ee2d09bae0e46c5d1c6fb340635641d1f6a22feff84990cb5c" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:02:15.493557 containerd[1684]: time="2025-10-30T00:02:15.493529035Z" level=info msg="connecting to shim 
1a2b3469ee806041415ab70ff7c3fad6f60c32af553ec406d96ce013053ff7e9" address="unix:///run/containerd/s/eaccc00392d8b16f058a791e6583eec50ffa234dd7a1cba076d34f82448558dc" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:02:15.500247 kubelet[2604]: E1030 00:02:15.500224 2604 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 30 00:02:15.568216 systemd[1]: Started cri-containerd-f9815cf01ee3d852baa202d2c31366a20728778c23a0cb7730e07c512bac23a1.scope - libcontainer container f9815cf01ee3d852baa202d2c31366a20728778c23a0cb7730e07c512bac23a1. Oct 30 00:02:15.573428 systemd[1]: Started cri-containerd-1a2b3469ee806041415ab70ff7c3fad6f60c32af553ec406d96ce013053ff7e9.scope - libcontainer container 1a2b3469ee806041415ab70ff7c3fad6f60c32af553ec406d96ce013053ff7e9. Oct 30 00:02:15.575799 systemd[1]: Started cri-containerd-2bee32f0fea3188dd25b97bd17a4627f01a9a591ae8888653c4d58ee2b436b78.scope - libcontainer container 2bee32f0fea3188dd25b97bd17a4627f01a9a591ae8888653c4d58ee2b436b78. 
Oct 30 00:02:15.614009 containerd[1684]: time="2025-10-30T00:02:15.613909061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a2b3469ee806041415ab70ff7c3fad6f60c32af553ec406d96ce013053ff7e9\"" Oct 30 00:02:15.619008 containerd[1684]: time="2025-10-30T00:02:15.618990704Z" level=info msg="CreateContainer within sandbox \"1a2b3469ee806041415ab70ff7c3fad6f60c32af553ec406d96ce013053ff7e9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 30 00:02:15.623710 containerd[1684]: time="2025-10-30T00:02:15.623694825Z" level=info msg="Container b7edf12cbceccf3b86b42da65ea77eb20f833024f83d8c4d134dad79ae30f8a3: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:02:15.633362 containerd[1684]: time="2025-10-30T00:02:15.633212040Z" level=info msg="CreateContainer within sandbox \"1a2b3469ee806041415ab70ff7c3fad6f60c32af553ec406d96ce013053ff7e9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b7edf12cbceccf3b86b42da65ea77eb20f833024f83d8c4d134dad79ae30f8a3\"" Oct 30 00:02:15.634897 containerd[1684]: time="2025-10-30T00:02:15.633750388Z" level=info msg="StartContainer for \"b7edf12cbceccf3b86b42da65ea77eb20f833024f83d8c4d134dad79ae30f8a3\"" Oct 30 00:02:15.640394 containerd[1684]: time="2025-10-30T00:02:15.640373410Z" level=info msg="connecting to shim b7edf12cbceccf3b86b42da65ea77eb20f833024f83d8c4d134dad79ae30f8a3" address="unix:///run/containerd/s/eaccc00392d8b16f058a791e6583eec50ffa234dd7a1cba076d34f82448558dc" protocol=ttrpc version=3 Oct 30 00:02:15.643347 containerd[1684]: time="2025-10-30T00:02:15.643324212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9815cf01ee3d852baa202d2c31366a20728778c23a0cb7730e07c512bac23a1\"" Oct 30 00:02:15.646951 
containerd[1684]: time="2025-10-30T00:02:15.646929914Z" level=info msg="CreateContainer within sandbox \"f9815cf01ee3d852baa202d2c31366a20728778c23a0cb7730e07c512bac23a1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 30 00:02:15.652165 containerd[1684]: time="2025-10-30T00:02:15.651808658Z" level=info msg="Container 34f12ae8fecde0d760d4c4f9b5a46b9441ca0421632c44d7eda4983c8c9ce239: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:02:15.661369 containerd[1684]: time="2025-10-30T00:02:15.660318702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:46ccd9020aa0213fef67fa6a9333de16,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bee32f0fea3188dd25b97bd17a4627f01a9a591ae8888653c4d58ee2b436b78\"" Oct 30 00:02:15.660653 systemd[1]: Started cri-containerd-b7edf12cbceccf3b86b42da65ea77eb20f833024f83d8c4d134dad79ae30f8a3.scope - libcontainer container b7edf12cbceccf3b86b42da65ea77eb20f833024f83d8c4d134dad79ae30f8a3. 
Oct 30 00:02:15.672769 containerd[1684]: time="2025-10-30T00:02:15.672670040Z" level=info msg="CreateContainer within sandbox \"2bee32f0fea3188dd25b97bd17a4627f01a9a591ae8888653c4d58ee2b436b78\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 30 00:02:15.685908 containerd[1684]: time="2025-10-30T00:02:15.685858991Z" level=info msg="CreateContainer within sandbox \"f9815cf01ee3d852baa202d2c31366a20728778c23a0cb7730e07c512bac23a1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"34f12ae8fecde0d760d4c4f9b5a46b9441ca0421632c44d7eda4983c8c9ce239\"" Oct 30 00:02:15.686172 containerd[1684]: time="2025-10-30T00:02:15.686161487Z" level=info msg="StartContainer for \"34f12ae8fecde0d760d4c4f9b5a46b9441ca0421632c44d7eda4983c8c9ce239\"" Oct 30 00:02:15.686837 containerd[1684]: time="2025-10-30T00:02:15.686795826Z" level=info msg="connecting to shim 34f12ae8fecde0d760d4c4f9b5a46b9441ca0421632c44d7eda4983c8c9ce239" address="unix:///run/containerd/s/4a8d70ae9c2b72ee2d09bae0e46c5d1c6fb340635641d1f6a22feff84990cb5c" protocol=ttrpc version=3 Oct 30 00:02:15.712200 systemd[1]: Started cri-containerd-34f12ae8fecde0d760d4c4f9b5a46b9441ca0421632c44d7eda4983c8c9ce239.scope - libcontainer container 34f12ae8fecde0d760d4c4f9b5a46b9441ca0421632c44d7eda4983c8c9ce239. 
Oct 30 00:02:15.715708 containerd[1684]: time="2025-10-30T00:02:15.715688550Z" level=info msg="Container 4863ff5a5456acae94c1f70e8bb2e5f2a22b3831ef1feb72b12a45ac588c8955: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:02:15.721181 containerd[1684]: time="2025-10-30T00:02:15.721160583Z" level=info msg="StartContainer for \"b7edf12cbceccf3b86b42da65ea77eb20f833024f83d8c4d134dad79ae30f8a3\" returns successfully" Oct 30 00:02:15.725379 kubelet[2604]: E1030 00:02:15.725272 2604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="1.6s" Oct 30 00:02:15.730237 kubelet[2604]: E1030 00:02:15.730224 2604 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 30 00:02:15.736270 containerd[1684]: time="2025-10-30T00:02:15.736247176Z" level=info msg="CreateContainer within sandbox \"2bee32f0fea3188dd25b97bd17a4627f01a9a591ae8888653c4d58ee2b436b78\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4863ff5a5456acae94c1f70e8bb2e5f2a22b3831ef1feb72b12a45ac588c8955\"" Oct 30 00:02:15.736598 containerd[1684]: time="2025-10-30T00:02:15.736583808Z" level=info msg="StartContainer for \"4863ff5a5456acae94c1f70e8bb2e5f2a22b3831ef1feb72b12a45ac588c8955\"" Oct 30 00:02:15.737302 containerd[1684]: time="2025-10-30T00:02:15.737288655Z" level=info msg="connecting to shim 4863ff5a5456acae94c1f70e8bb2e5f2a22b3831ef1feb72b12a45ac588c8955" address="unix:///run/containerd/s/6242b2e7f6564fe8d1b095c1cbfecdee23745a612f499160aef0d0455807c99c" protocol=ttrpc version=3 Oct 30 
00:02:15.753306 systemd[1]: Started cri-containerd-4863ff5a5456acae94c1f70e8bb2e5f2a22b3831ef1feb72b12a45ac588c8955.scope - libcontainer container 4863ff5a5456acae94c1f70e8bb2e5f2a22b3831ef1feb72b12a45ac588c8955. Oct 30 00:02:15.763146 containerd[1684]: time="2025-10-30T00:02:15.763060722Z" level=info msg="StartContainer for \"34f12ae8fecde0d760d4c4f9b5a46b9441ca0421632c44d7eda4983c8c9ce239\" returns successfully" Oct 30 00:02:15.797679 containerd[1684]: time="2025-10-30T00:02:15.797657079Z" level=info msg="StartContainer for \"4863ff5a5456acae94c1f70e8bb2e5f2a22b3831ef1feb72b12a45ac588c8955\" returns successfully" Oct 30 00:02:15.962399 kubelet[2604]: I1030 00:02:15.962261 2604 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 00:02:15.962469 kubelet[2604]: E1030 00:02:15.962433 2604 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Oct 30 00:02:16.351360 kubelet[2604]: E1030 00:02:16.351302 2604 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:02:16.352895 kubelet[2604]: E1030 00:02:16.352884 2604 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:02:16.354087 kubelet[2604]: E1030 00:02:16.354078 2604 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:02:17.184440 kubelet[2604]: E1030 00:02:17.184292 2604 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18731be16ce4f01a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-30 00:02:14.240530458 +0000 UTC m=+0.660655369,LastTimestamp:2025-10-30 00:02:14.240530458 +0000 UTC m=+0.660655369,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 30 00:02:17.329261 kubelet[2604]: E1030 00:02:17.329236 2604 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 30 00:02:17.367395 kubelet[2604]: E1030 00:02:17.367061 2604 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:02:17.367395 kubelet[2604]: E1030 00:02:17.367236 2604 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:02:17.367395 kubelet[2604]: E1030 00:02:17.367341 2604 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:02:17.409382 kubelet[2604]: E1030 00:02:17.409357 2604 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Oct 30 00:02:17.563996 kubelet[2604]: I1030 00:02:17.563929 2604 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 00:02:17.567731 kubelet[2604]: I1030 00:02:17.567678 2604 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 30 00:02:17.567987 kubelet[2604]: E1030 00:02:17.567787 2604 kubelet_node_status.go:486] "Error updating node status, will retry" 
err="error getting node \"localhost\": node \"localhost\" not found" Oct 30 00:02:17.573584 kubelet[2604]: E1030 00:02:17.573564 2604 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:02:17.673986 kubelet[2604]: E1030 00:02:17.673950 2604 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:02:17.774374 kubelet[2604]: E1030 00:02:17.774347 2604 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:02:17.875478 kubelet[2604]: E1030 00:02:17.875395 2604 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:02:17.976108 kubelet[2604]: E1030 00:02:17.976069 2604 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:02:18.076642 kubelet[2604]: E1030 00:02:18.076614 2604 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:02:18.176809 kubelet[2604]: E1030 00:02:18.176727 2604 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:02:18.277663 kubelet[2604]: E1030 00:02:18.277629 2604 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:02:18.367567 kubelet[2604]: E1030 00:02:18.367486 2604 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:02:18.378775 kubelet[2604]: E1030 00:02:18.378744 2604 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:02:18.402777 kubelet[2604]: I1030 00:02:18.402629 2604 kubelet.go:3219] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-localhost" Oct 30 00:02:18.407899 kubelet[2604]: I1030 00:02:18.407869 2604 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 30 00:02:18.411494 kubelet[2604]: I1030 00:02:18.411379 2604 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 30 00:02:19.207399 systemd[1]: Reload requested from client PID 2888 ('systemctl') (unit session-9.scope)... Oct 30 00:02:19.207560 systemd[1]: Reloading... Oct 30 00:02:19.240857 kubelet[2604]: I1030 00:02:19.240837 2604 apiserver.go:52] "Watching apiserver" Oct 30 00:02:19.264150 zram_generator::config[2934]: No configuration found. Oct 30 00:02:19.303586 kubelet[2604]: I1030 00:02:19.303564 2604 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 30 00:02:19.345170 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 30 00:02:19.422683 systemd[1]: Reloading finished in 214 ms. Oct 30 00:02:19.450984 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:02:19.466940 systemd[1]: kubelet.service: Deactivated successfully. Oct 30 00:02:19.467136 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:02:19.467174 systemd[1]: kubelet.service: Consumed 817ms CPU time, 124.6M memory peak. Oct 30 00:02:19.468555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:02:19.791220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:02:19.792830 (kubelet)[2999]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 00:02:19.883177 kubelet[2999]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Oct 30 00:02:19.883476 kubelet[2999]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 00:02:19.883973 kubelet[2999]: I1030 00:02:19.883956 2999 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 00:02:19.892466 kubelet[2999]: I1030 00:02:19.892450 2999 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 30 00:02:19.892466 kubelet[2999]: I1030 00:02:19.892463 2999 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 00:02:19.902028 kubelet[2999]: I1030 00:02:19.902009 2999 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 30 00:02:19.902028 kubelet[2999]: I1030 00:02:19.902021 2999 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 30 00:02:19.902180 kubelet[2999]: I1030 00:02:19.902167 2999 server.go:956] "Client rotation is on, will bootstrap in background" Oct 30 00:02:19.902806 kubelet[2999]: I1030 00:02:19.902794 2999 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 30 00:02:19.948541 kubelet[2999]: I1030 00:02:19.948441 2999 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 00:02:19.990278 kubelet[2999]: I1030 00:02:19.990261 2999 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 30 00:02:19.993118 kubelet[2999]: I1030 00:02:19.992774 2999 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 30 00:02:19.993118 kubelet[2999]: I1030 00:02:19.992918 2999 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 00:02:19.993118 kubelet[2999]: I1030 00:02:19.992935 2999 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 00:02:19.993118 kubelet[2999]: I1030 00:02:19.993050 2999 topology_manager.go:138] "Creating topology manager with none policy" Oct 30 00:02:19.993283 
kubelet[2999]: I1030 00:02:19.993058 2999 container_manager_linux.go:306] "Creating device plugin manager" Oct 30 00:02:19.993283 kubelet[2999]: I1030 00:02:19.993076 2999 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 30 00:02:19.996225 kubelet[2999]: I1030 00:02:19.996215 2999 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:02:19.996401 kubelet[2999]: I1030 00:02:19.996394 2999 kubelet.go:475] "Attempting to sync node with API server" Oct 30 00:02:19.996447 kubelet[2999]: I1030 00:02:19.996441 2999 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 00:02:19.996499 kubelet[2999]: I1030 00:02:19.996492 2999 kubelet.go:387] "Adding apiserver pod source" Oct 30 00:02:19.996539 kubelet[2999]: I1030 00:02:19.996535 2999 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 00:02:20.002413 kubelet[2999]: I1030 00:02:20.002389 2999 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 00:02:20.003386 kubelet[2999]: I1030 00:02:20.003369 2999 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 30 00:02:20.003448 kubelet[2999]: I1030 00:02:20.003395 2999 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 30 00:02:20.007176 kubelet[2999]: I1030 00:02:20.006930 2999 server.go:1262] "Started kubelet" Oct 30 00:02:20.008505 kubelet[2999]: I1030 00:02:20.008489 2999 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 00:02:20.009242 kubelet[2999]: I1030 00:02:20.009234 2999 server.go:310] "Adding debug handlers to kubelet server" Oct 30 00:02:20.012134 kubelet[2999]: I1030 00:02:20.011978 2999 ratelimit.go:56] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Oct 30 00:02:20.012185 kubelet[2999]: I1030 00:02:20.012148 2999 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 30 00:02:20.012414 kubelet[2999]: I1030 00:02:20.012401 2999 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 00:02:20.013437 kubelet[2999]: I1030 00:02:20.013429 2999 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 00:02:20.014993 kubelet[2999]: I1030 00:02:20.014984 2999 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 00:02:20.017010 kubelet[2999]: I1030 00:02:20.016998 2999 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 30 00:02:20.019149 kubelet[2999]: I1030 00:02:20.019136 2999 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 30 00:02:20.019439 kubelet[2999]: I1030 00:02:20.019432 2999 reconciler.go:29] "Reconciler: start to sync state" Oct 30 00:02:20.020426 kubelet[2999]: I1030 00:02:20.020409 2999 factory.go:223] Registration of the systemd container factory successfully Oct 30 00:02:20.020581 kubelet[2999]: I1030 00:02:20.020555 2999 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 00:02:20.022279 kubelet[2999]: I1030 00:02:20.022200 2999 factory.go:223] Registration of the containerd container factory successfully Oct 30 00:02:20.022325 kubelet[2999]: E1030 00:02:20.022278 2999 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 00:02:20.030896 kubelet[2999]: I1030 00:02:20.030873 2999 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Oct 30 00:02:20.032671 kubelet[2999]: I1030 00:02:20.032630 2999 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 30 00:02:20.032671 kubelet[2999]: I1030 00:02:20.032641 2999 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 30 00:02:20.032671 kubelet[2999]: I1030 00:02:20.032655 2999 kubelet.go:2427] "Starting kubelet main sync loop" Oct 30 00:02:20.032808 kubelet[2999]: E1030 00:02:20.032791 2999 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 00:02:20.047910 kubelet[2999]: I1030 00:02:20.047744 2999 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 00:02:20.047910 kubelet[2999]: I1030 00:02:20.047756 2999 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 00:02:20.048500 kubelet[2999]: I1030 00:02:20.048216 2999 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:02:20.048500 kubelet[2999]: I1030 00:02:20.048293 2999 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 30 00:02:20.048500 kubelet[2999]: I1030 00:02:20.048299 2999 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 30 00:02:20.048500 kubelet[2999]: I1030 00:02:20.048310 2999 policy_none.go:49] "None policy: Start" Oct 30 00:02:20.048500 kubelet[2999]: I1030 00:02:20.048315 2999 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 30 00:02:20.048500 kubelet[2999]: I1030 00:02:20.048321 2999 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 30 00:02:20.048500 kubelet[2999]: I1030 00:02:20.048384 2999 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 30 00:02:20.048500 kubelet[2999]: I1030 00:02:20.048391 2999 policy_none.go:47] "Start" Oct 30 00:02:20.051679 kubelet[2999]: E1030 00:02:20.051661 2999 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 30 00:02:20.052147 kubelet[2999]: I1030 00:02:20.051920 2999 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 00:02:20.052147 kubelet[2999]: I1030 00:02:20.051929 2999 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 00:02:20.053418 kubelet[2999]: I1030 00:02:20.053190 2999 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 00:02:20.055114 kubelet[2999]: E1030 00:02:20.054318 2999 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 30 00:02:20.133488 kubelet[2999]: I1030 00:02:20.133462 2999 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 30 00:02:20.133653 kubelet[2999]: I1030 00:02:20.133602 2999 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 30 00:02:20.133772 kubelet[2999]: I1030 00:02:20.133537 2999 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 30 00:02:20.138270 kubelet[2999]: E1030 00:02:20.138183 2999 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 30 00:02:20.138383 kubelet[2999]: E1030 00:02:20.138096 2999 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 30 00:02:20.138428 kubelet[2999]: E1030 00:02:20.138410 2999 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 30 00:02:20.157301 kubelet[2999]: I1030 00:02:20.157285 2999 kubelet_node_status.go:75] "Attempting to register 
node" node="localhost" Oct 30 00:02:20.161363 kubelet[2999]: I1030 00:02:20.161192 2999 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 30 00:02:20.161363 kubelet[2999]: I1030 00:02:20.161248 2999 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 30 00:02:20.221009 kubelet[2999]: I1030 00:02:20.220985 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:02:20.221706 kubelet[2999]: I1030 00:02:20.221134 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:02:20.221706 kubelet[2999]: I1030 00:02:20.221161 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:02:20.221706 kubelet[2999]: I1030 00:02:20.221183 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:02:20.221706 kubelet[2999]: I1030 00:02:20.221195 2999 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 30 00:02:20.221706 kubelet[2999]: I1030 00:02:20.221208 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:02:20.221820 kubelet[2999]: I1030 00:02:20.221218 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46ccd9020aa0213fef67fa6a9333de16-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"46ccd9020aa0213fef67fa6a9333de16\") " pod="kube-system/kube-apiserver-localhost" Oct 30 00:02:20.221820 kubelet[2999]: I1030 00:02:20.221226 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46ccd9020aa0213fef67fa6a9333de16-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"46ccd9020aa0213fef67fa6a9333de16\") " pod="kube-system/kube-apiserver-localhost" Oct 30 00:02:20.221820 kubelet[2999]: I1030 00:02:20.221234 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46ccd9020aa0213fef67fa6a9333de16-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"46ccd9020aa0213fef67fa6a9333de16\") " pod="kube-system/kube-apiserver-localhost" Oct 30 00:02:20.997311 kubelet[2999]: I1030 00:02:20.997280 2999 
apiserver.go:52] "Watching apiserver" Oct 30 00:02:21.019981 kubelet[2999]: I1030 00:02:21.019952 2999 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 30 00:02:21.042081 kubelet[2999]: I1030 00:02:21.042063 2999 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 30 00:02:21.042465 kubelet[2999]: I1030 00:02:21.042445 2999 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 30 00:02:21.046534 kubelet[2999]: E1030 00:02:21.046510 2999 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 30 00:02:21.046865 kubelet[2999]: E1030 00:02:21.046849 2999 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 30 00:02:21.058588 kubelet[2999]: I1030 00:02:21.058510 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.058500665 podStartE2EDuration="3.058500665s" podCreationTimestamp="2025-10-30 00:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:02:21.054967481 +0000 UTC m=+1.204612860" watchObservedRunningTime="2025-10-30 00:02:21.058500665 +0000 UTC m=+1.208146044" Oct 30 00:02:21.062284 kubelet[2999]: I1030 00:02:21.061959 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.061950478 podStartE2EDuration="3.061950478s" podCreationTimestamp="2025-10-30 00:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:02:21.058836512 +0000 
UTC m=+1.208481891" watchObservedRunningTime="2025-10-30 00:02:21.061950478 +0000 UTC m=+1.211595850" Oct 30 00:02:21.066229 kubelet[2999]: I1030 00:02:21.066202 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.066194705 podStartE2EDuration="3.066194705s" podCreationTimestamp="2025-10-30 00:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:02:21.062479039 +0000 UTC m=+1.212124418" watchObservedRunningTime="2025-10-30 00:02:21.066194705 +0000 UTC m=+1.215840084" Oct 30 00:02:24.646689 kubelet[2999]: I1030 00:02:24.646668 2999 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 30 00:02:24.648211 containerd[1684]: time="2025-10-30T00:02:24.648188870Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 30 00:02:24.648373 kubelet[2999]: I1030 00:02:24.648301 2999 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 30 00:02:25.806748 systemd[1]: Created slice kubepods-besteffort-pod0fb2f63d_6bcc_425d_9dc1_38abf3eefe60.slice - libcontainer container kubepods-besteffort-pod0fb2f63d_6bcc_425d_9dc1_38abf3eefe60.slice. 
Oct 30 00:02:25.859454 kubelet[2999]: I1030 00:02:25.859407 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0fb2f63d-6bcc-425d-9dc1-38abf3eefe60-kube-proxy\") pod \"kube-proxy-xc25d\" (UID: \"0fb2f63d-6bcc-425d-9dc1-38abf3eefe60\") " pod="kube-system/kube-proxy-xc25d" Oct 30 00:02:25.859745 kubelet[2999]: I1030 00:02:25.859482 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s5sj\" (UniqueName: \"kubernetes.io/projected/0fb2f63d-6bcc-425d-9dc1-38abf3eefe60-kube-api-access-8s5sj\") pod \"kube-proxy-xc25d\" (UID: \"0fb2f63d-6bcc-425d-9dc1-38abf3eefe60\") " pod="kube-system/kube-proxy-xc25d" Oct 30 00:02:25.859745 kubelet[2999]: I1030 00:02:25.859510 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fb2f63d-6bcc-425d-9dc1-38abf3eefe60-xtables-lock\") pod \"kube-proxy-xc25d\" (UID: \"0fb2f63d-6bcc-425d-9dc1-38abf3eefe60\") " pod="kube-system/kube-proxy-xc25d" Oct 30 00:02:25.859745 kubelet[2999]: I1030 00:02:25.859522 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fb2f63d-6bcc-425d-9dc1-38abf3eefe60-lib-modules\") pod \"kube-proxy-xc25d\" (UID: \"0fb2f63d-6bcc-425d-9dc1-38abf3eefe60\") " pod="kube-system/kube-proxy-xc25d" Oct 30 00:02:25.920511 systemd[1]: Created slice kubepods-besteffort-podeaaa2b71_9af3_4c48_b762_6ef4447c47cb.slice - libcontainer container kubepods-besteffort-podeaaa2b71_9af3_4c48_b762_6ef4447c47cb.slice. 
Oct 30 00:02:25.923547 kubelet[2999]: E1030 00:02:25.923500 2999 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kubernetes-services-endpoint\"" type="*v1.ConfigMap" Oct 30 00:02:25.923618 kubelet[2999]: E1030 00:02:25.923566 2999 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Oct 30 00:02:25.923640 kubelet[2999]: E1030 00:02:25.923359 2999 status_manager.go:1018] "Failed to get status for pod" err="pods \"tigera-operator-65cdcdfd6d-g5b2d\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" podUID="eaaa2b71-9af3-4c48-b762-6ef4447c47cb" pod="tigera-operator/tigera-operator-65cdcdfd6d-g5b2d" Oct 30 00:02:25.960272 kubelet[2999]: I1030 00:02:25.960188 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzgcr\" (UniqueName: \"kubernetes.io/projected/eaaa2b71-9af3-4c48-b762-6ef4447c47cb-kube-api-access-wzgcr\") pod \"tigera-operator-65cdcdfd6d-g5b2d\" (UID: \"eaaa2b71-9af3-4c48-b762-6ef4447c47cb\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-g5b2d" Oct 30 00:02:25.960272 kubelet[2999]: I1030 00:02:25.960244 2999 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eaaa2b71-9af3-4c48-b762-6ef4447c47cb-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-g5b2d\" (UID: \"eaaa2b71-9af3-4c48-b762-6ef4447c47cb\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-g5b2d" Oct 30 00:02:26.116511 containerd[1684]: time="2025-10-30T00:02:26.116192753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xc25d,Uid:0fb2f63d-6bcc-425d-9dc1-38abf3eefe60,Namespace:kube-system,Attempt:0,}" Oct 30 00:02:26.128805 containerd[1684]: time="2025-10-30T00:02:26.128740400Z" level=info msg="connecting to shim 2c3d9fe867e7626f27f61aff01322775b387a9370b6cd047be126d74c2c86e8f" address="unix:///run/containerd/s/d0b30e929899511add3c22d222da737c2727a681b6de1471750334fe4fe4bece" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:02:26.153199 systemd[1]: Started cri-containerd-2c3d9fe867e7626f27f61aff01322775b387a9370b6cd047be126d74c2c86e8f.scope - libcontainer container 2c3d9fe867e7626f27f61aff01322775b387a9370b6cd047be126d74c2c86e8f. 
Oct 30 00:02:26.168815 containerd[1684]: time="2025-10-30T00:02:26.168789390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xc25d,Uid:0fb2f63d-6bcc-425d-9dc1-38abf3eefe60,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c3d9fe867e7626f27f61aff01322775b387a9370b6cd047be126d74c2c86e8f\"" Oct 30 00:02:26.172852 containerd[1684]: time="2025-10-30T00:02:26.172824018Z" level=info msg="CreateContainer within sandbox \"2c3d9fe867e7626f27f61aff01322775b387a9370b6cd047be126d74c2c86e8f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 30 00:02:26.179114 containerd[1684]: time="2025-10-30T00:02:26.178988644Z" level=info msg="Container 7966573c5ad4b95ad0db889af69f9621e42dee0e55134453b9b3cd1dde30c17e: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:02:26.179210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1021625873.mount: Deactivated successfully. Oct 30 00:02:26.183002 containerd[1684]: time="2025-10-30T00:02:26.182984798Z" level=info msg="CreateContainer within sandbox \"2c3d9fe867e7626f27f61aff01322775b387a9370b6cd047be126d74c2c86e8f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7966573c5ad4b95ad0db889af69f9621e42dee0e55134453b9b3cd1dde30c17e\"" Oct 30 00:02:26.184149 containerd[1684]: time="2025-10-30T00:02:26.183380065Z" level=info msg="StartContainer for \"7966573c5ad4b95ad0db889af69f9621e42dee0e55134453b9b3cd1dde30c17e\"" Oct 30 00:02:26.185230 containerd[1684]: time="2025-10-30T00:02:26.185205548Z" level=info msg="connecting to shim 7966573c5ad4b95ad0db889af69f9621e42dee0e55134453b9b3cd1dde30c17e" address="unix:///run/containerd/s/d0b30e929899511add3c22d222da737c2727a681b6de1471750334fe4fe4bece" protocol=ttrpc version=3 Oct 30 00:02:26.198213 systemd[1]: Started cri-containerd-7966573c5ad4b95ad0db889af69f9621e42dee0e55134453b9b3cd1dde30c17e.scope - libcontainer container 7966573c5ad4b95ad0db889af69f9621e42dee0e55134453b9b3cd1dde30c17e. 
Oct 30 00:02:26.225402 containerd[1684]: time="2025-10-30T00:02:26.225339730Z" level=info msg="StartContainer for \"7966573c5ad4b95ad0db889af69f9621e42dee0e55134453b9b3cd1dde30c17e\" returns successfully" Oct 30 00:02:27.064230 kubelet[2999]: E1030 00:02:27.064178 2999 projected.go:291] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Oct 30 00:02:27.064230 kubelet[2999]: E1030 00:02:27.064202 2999 projected.go:196] Error preparing data for projected volume kube-api-access-wzgcr for pod tigera-operator/tigera-operator-65cdcdfd6d-g5b2d: failed to sync configmap cache: timed out waiting for the condition Oct 30 00:02:27.064902 kubelet[2999]: E1030 00:02:27.064397 2999 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaaa2b71-9af3-4c48-b762-6ef4447c47cb-kube-api-access-wzgcr podName:eaaa2b71-9af3-4c48-b762-6ef4447c47cb nodeName:}" failed. No retries permitted until 2025-10-30 00:02:27.564377828 +0000 UTC m=+7.714023207 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wzgcr" (UniqueName: "kubernetes.io/projected/eaaa2b71-9af3-4c48-b762-6ef4447c47cb-kube-api-access-wzgcr") pod "tigera-operator-65cdcdfd6d-g5b2d" (UID: "eaaa2b71-9af3-4c48-b762-6ef4447c47cb") : failed to sync configmap cache: timed out waiting for the condition Oct 30 00:02:27.065375 kubelet[2999]: I1030 00:02:27.065194 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xc25d" podStartSLOduration=2.0651829409999998 podStartE2EDuration="2.065182941s" podCreationTimestamp="2025-10-30 00:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:02:27.064416181 +0000 UTC m=+7.214061568" watchObservedRunningTime="2025-10-30 00:02:27.065182941 +0000 UTC m=+7.214828321" Oct 30 00:02:27.724127 containerd[1684]: time="2025-10-30T00:02:27.724085219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-g5b2d,Uid:eaaa2b71-9af3-4c48-b762-6ef4447c47cb,Namespace:tigera-operator,Attempt:0,}" Oct 30 00:02:27.736581 containerd[1684]: time="2025-10-30T00:02:27.736550721Z" level=info msg="connecting to shim 3c0ccebc138fb42ce5f6ec3b6907a100f6016be6b224ea3f46ca8cdcf1344962" address="unix:///run/containerd/s/ab71a3aeaac51e0d6e367bd6240bf88a07dfcd95937685e9a629144cb6709eaa" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:02:27.755287 systemd[1]: Started cri-containerd-3c0ccebc138fb42ce5f6ec3b6907a100f6016be6b224ea3f46ca8cdcf1344962.scope - libcontainer container 3c0ccebc138fb42ce5f6ec3b6907a100f6016be6b224ea3f46ca8cdcf1344962. 
Oct 30 00:02:27.861567 containerd[1684]: time="2025-10-30T00:02:27.861487945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-g5b2d,Uid:eaaa2b71-9af3-4c48-b762-6ef4447c47cb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3c0ccebc138fb42ce5f6ec3b6907a100f6016be6b224ea3f46ca8cdcf1344962\"" Oct 30 00:02:27.863483 containerd[1684]: time="2025-10-30T00:02:27.863457665Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 30 00:02:29.308718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1711518917.mount: Deactivated successfully. Oct 30 00:02:29.727735 containerd[1684]: time="2025-10-30T00:02:29.727634826Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:29.728259 containerd[1684]: time="2025-10-30T00:02:29.728049367Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 30 00:02:29.728518 containerd[1684]: time="2025-10-30T00:02:29.728495659Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:29.731170 containerd[1684]: time="2025-10-30T00:02:29.730866085Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:29.731170 containerd[1684]: time="2025-10-30T00:02:29.731096790Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.867612947s" Oct 30 00:02:29.731170 
containerd[1684]: time="2025-10-30T00:02:29.731121177Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 30 00:02:29.737817 containerd[1684]: time="2025-10-30T00:02:29.737804804Z" level=info msg="CreateContainer within sandbox \"3c0ccebc138fb42ce5f6ec3b6907a100f6016be6b224ea3f46ca8cdcf1344962\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 30 00:02:29.759939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2932303162.mount: Deactivated successfully. Oct 30 00:02:29.760800 containerd[1684]: time="2025-10-30T00:02:29.760449688Z" level=info msg="Container 29562b8bc85b9f70643471baa509825fd711cf3a0b76653091dd4d7884496cbf: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:02:29.769431 containerd[1684]: time="2025-10-30T00:02:29.769407159Z" level=info msg="CreateContainer within sandbox \"3c0ccebc138fb42ce5f6ec3b6907a100f6016be6b224ea3f46ca8cdcf1344962\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"29562b8bc85b9f70643471baa509825fd711cf3a0b76653091dd4d7884496cbf\"" Oct 30 00:02:29.770127 containerd[1684]: time="2025-10-30T00:02:29.770082103Z" level=info msg="StartContainer for \"29562b8bc85b9f70643471baa509825fd711cf3a0b76653091dd4d7884496cbf\"" Oct 30 00:02:29.770549 containerd[1684]: time="2025-10-30T00:02:29.770532799Z" level=info msg="connecting to shim 29562b8bc85b9f70643471baa509825fd711cf3a0b76653091dd4d7884496cbf" address="unix:///run/containerd/s/ab71a3aeaac51e0d6e367bd6240bf88a07dfcd95937685e9a629144cb6709eaa" protocol=ttrpc version=3 Oct 30 00:02:29.784200 systemd[1]: Started cri-containerd-29562b8bc85b9f70643471baa509825fd711cf3a0b76653091dd4d7884496cbf.scope - libcontainer container 29562b8bc85b9f70643471baa509825fd711cf3a0b76653091dd4d7884496cbf. 
Oct 30 00:02:29.810118 containerd[1684]: time="2025-10-30T00:02:29.810077857Z" level=info msg="StartContainer for \"29562b8bc85b9f70643471baa509825fd711cf3a0b76653091dd4d7884496cbf\" returns successfully" Oct 30 00:02:30.067586 kubelet[2999]: I1030 00:02:30.067136 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-g5b2d" podStartSLOduration=3.197683348 podStartE2EDuration="5.067125033s" podCreationTimestamp="2025-10-30 00:02:25 +0000 UTC" firstStartedPulling="2025-10-30 00:02:27.862227216 +0000 UTC m=+8.011872595" lastFinishedPulling="2025-10-30 00:02:29.731668913 +0000 UTC m=+9.881314280" observedRunningTime="2025-10-30 00:02:30.066322097 +0000 UTC m=+10.215967488" watchObservedRunningTime="2025-10-30 00:02:30.067125033 +0000 UTC m=+10.216770419" Oct 30 00:02:33.594318 sudo[2009]: pam_unix(sudo:session): session closed for user root Oct 30 00:02:33.608621 sshd-session[2005]: pam_unix(sshd:session): session closed for user core Oct 30 00:02:33.612716 sshd[2008]: Connection closed by 139.178.89.65 port 55894 Oct 30 00:02:33.613087 systemd[1]: sshd@6-139.178.70.100:22-139.178.89.65:55894.service: Deactivated successfully. Oct 30 00:02:33.614791 systemd[1]: session-9.scope: Deactivated successfully. Oct 30 00:02:33.615858 systemd[1]: session-9.scope: Consumed 3.369s CPU time, 156.2M memory peak. Oct 30 00:02:33.616869 systemd-logind[1659]: Session 9 logged out. Waiting for processes to exit. Oct 30 00:02:33.617931 systemd-logind[1659]: Removed session 9. Oct 30 00:02:37.655748 systemd[1]: Created slice kubepods-besteffort-podbf47b833_dbb7_46fa_9887_d6c44abca5e2.slice - libcontainer container kubepods-besteffort-podbf47b833_dbb7_46fa_9887_d6c44abca5e2.slice. 
Oct 30 00:02:37.834947 kubelet[2999]: I1030 00:02:37.834848 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bf47b833-dbb7-46fa-9887-d6c44abca5e2-typha-certs\") pod \"calico-typha-7cbf4bd4bc-p27nx\" (UID: \"bf47b833-dbb7-46fa-9887-d6c44abca5e2\") " pod="calico-system/calico-typha-7cbf4bd4bc-p27nx" Oct 30 00:02:37.834947 kubelet[2999]: I1030 00:02:37.834893 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nj84\" (UniqueName: \"kubernetes.io/projected/bf47b833-dbb7-46fa-9887-d6c44abca5e2-kube-api-access-5nj84\") pod \"calico-typha-7cbf4bd4bc-p27nx\" (UID: \"bf47b833-dbb7-46fa-9887-d6c44abca5e2\") " pod="calico-system/calico-typha-7cbf4bd4bc-p27nx" Oct 30 00:02:37.834947 kubelet[2999]: I1030 00:02:37.834916 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf47b833-dbb7-46fa-9887-d6c44abca5e2-tigera-ca-bundle\") pod \"calico-typha-7cbf4bd4bc-p27nx\" (UID: \"bf47b833-dbb7-46fa-9887-d6c44abca5e2\") " pod="calico-system/calico-typha-7cbf4bd4bc-p27nx" Oct 30 00:02:37.837030 systemd[1]: Created slice kubepods-besteffort-pod64cb6422_f782_4814_9e26_47ed295517b1.slice - libcontainer container kubepods-besteffort-pod64cb6422_f782_4814_9e26_47ed295517b1.slice. 
Oct 30 00:02:37.935566 kubelet[2999]: I1030 00:02:37.935544 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64cb6422-f782-4814-9e26-47ed295517b1-tigera-ca-bundle\") pod \"calico-node-rhf5k\" (UID: \"64cb6422-f782-4814-9e26-47ed295517b1\") " pod="calico-system/calico-node-rhf5k" Oct 30 00:02:37.935652 kubelet[2999]: I1030 00:02:37.935570 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64cb6422-f782-4814-9e26-47ed295517b1-xtables-lock\") pod \"calico-node-rhf5k\" (UID: \"64cb6422-f782-4814-9e26-47ed295517b1\") " pod="calico-system/calico-node-rhf5k" Oct 30 00:02:37.935652 kubelet[2999]: I1030 00:02:37.935583 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/64cb6422-f782-4814-9e26-47ed295517b1-node-certs\") pod \"calico-node-rhf5k\" (UID: \"64cb6422-f782-4814-9e26-47ed295517b1\") " pod="calico-system/calico-node-rhf5k" Oct 30 00:02:37.935652 kubelet[2999]: I1030 00:02:37.935593 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/64cb6422-f782-4814-9e26-47ed295517b1-cni-log-dir\") pod \"calico-node-rhf5k\" (UID: \"64cb6422-f782-4814-9e26-47ed295517b1\") " pod="calico-system/calico-node-rhf5k" Oct 30 00:02:37.935652 kubelet[2999]: I1030 00:02:37.935617 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/64cb6422-f782-4814-9e26-47ed295517b1-flexvol-driver-host\") pod \"calico-node-rhf5k\" (UID: \"64cb6422-f782-4814-9e26-47ed295517b1\") " pod="calico-system/calico-node-rhf5k" Oct 30 00:02:37.935652 kubelet[2999]: I1030 00:02:37.935627 2999 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/64cb6422-f782-4814-9e26-47ed295517b1-var-lib-calico\") pod \"calico-node-rhf5k\" (UID: \"64cb6422-f782-4814-9e26-47ed295517b1\") " pod="calico-system/calico-node-rhf5k" Oct 30 00:02:37.935744 kubelet[2999]: I1030 00:02:37.935640 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/64cb6422-f782-4814-9e26-47ed295517b1-cni-bin-dir\") pod \"calico-node-rhf5k\" (UID: \"64cb6422-f782-4814-9e26-47ed295517b1\") " pod="calico-system/calico-node-rhf5k" Oct 30 00:02:37.935744 kubelet[2999]: I1030 00:02:37.935651 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/64cb6422-f782-4814-9e26-47ed295517b1-cni-net-dir\") pod \"calico-node-rhf5k\" (UID: \"64cb6422-f782-4814-9e26-47ed295517b1\") " pod="calico-system/calico-node-rhf5k" Oct 30 00:02:37.935744 kubelet[2999]: I1030 00:02:37.935659 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhgcc\" (UniqueName: \"kubernetes.io/projected/64cb6422-f782-4814-9e26-47ed295517b1-kube-api-access-nhgcc\") pod \"calico-node-rhf5k\" (UID: \"64cb6422-f782-4814-9e26-47ed295517b1\") " pod="calico-system/calico-node-rhf5k" Oct 30 00:02:37.935744 kubelet[2999]: I1030 00:02:37.935670 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/64cb6422-f782-4814-9e26-47ed295517b1-policysync\") pod \"calico-node-rhf5k\" (UID: \"64cb6422-f782-4814-9e26-47ed295517b1\") " pod="calico-system/calico-node-rhf5k" Oct 30 00:02:37.935744 kubelet[2999]: I1030 00:02:37.935678 2999 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64cb6422-f782-4814-9e26-47ed295517b1-lib-modules\") pod \"calico-node-rhf5k\" (UID: \"64cb6422-f782-4814-9e26-47ed295517b1\") " pod="calico-system/calico-node-rhf5k" Oct 30 00:02:37.935854 kubelet[2999]: I1030 00:02:37.935686 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/64cb6422-f782-4814-9e26-47ed295517b1-var-run-calico\") pod \"calico-node-rhf5k\" (UID: \"64cb6422-f782-4814-9e26-47ed295517b1\") " pod="calico-system/calico-node-rhf5k" Oct 30 00:02:37.983116 containerd[1684]: time="2025-10-30T00:02:37.983076447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cbf4bd4bc-p27nx,Uid:bf47b833-dbb7-46fa-9887-d6c44abca5e2,Namespace:calico-system,Attempt:0,}" Oct 30 00:02:38.034325 kubelet[2999]: E1030 00:02:38.034273 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pkzg" podUID="b56fb7b2-ab30-4c17-b3d4-f41ec039c361" Oct 30 00:02:38.058720 kubelet[2999]: E1030 00:02:38.055923 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.058720 kubelet[2999]: W1030 00:02:38.055947 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.060111 kubelet[2999]: E1030 00:02:38.059622 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:38.060611 kubelet[2999]: E1030 00:02:38.060599 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.060652 kubelet[2999]: W1030 00:02:38.060621 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.060652 kubelet[2999]: E1030 00:02:38.060637 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:38.121502 containerd[1684]: time="2025-10-30T00:02:38.121349811Z" level=info msg="connecting to shim 1effb5140c4c65131e29cf7e7b799a43ef5fc5cdf9191672bb05144966866574" address="unix:///run/containerd/s/9fb4ae757c0ff91d19e82006a6030a4f1c41cf6ab756595e647dee3acd30a43c" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:02:38.137796 kubelet[2999]: E1030 00:02:38.137703 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.137796 kubelet[2999]: W1030 00:02:38.137722 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.137796 kubelet[2999]: E1030 00:02:38.137742 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:38.137796 kubelet[2999]: I1030 00:02:38.137767 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9tbk\" (UniqueName: \"kubernetes.io/projected/b56fb7b2-ab30-4c17-b3d4-f41ec039c361-kube-api-access-h9tbk\") pod \"csi-node-driver-7pkzg\" (UID: \"b56fb7b2-ab30-4c17-b3d4-f41ec039c361\") " pod="calico-system/csi-node-driver-7pkzg" Oct 30 00:02:38.138867 kubelet[2999]: E1030 00:02:38.138181 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.138867 kubelet[2999]: W1030 00:02:38.138191 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.138867 kubelet[2999]: E1030 00:02:38.138201 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:38.138867 kubelet[2999]: E1030 00:02:38.138542 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.138867 kubelet[2999]: W1030 00:02:38.138548 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.138867 kubelet[2999]: E1030 00:02:38.138557 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:38.138867 kubelet[2999]: E1030 00:02:38.138747 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.138867 kubelet[2999]: W1030 00:02:38.138753 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.138867 kubelet[2999]: E1030 00:02:38.138760 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:38.139048 kubelet[2999]: I1030 00:02:38.138779 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b56fb7b2-ab30-4c17-b3d4-f41ec039c361-varrun\") pod \"csi-node-driver-7pkzg\" (UID: \"b56fb7b2-ab30-4c17-b3d4-f41ec039c361\") " pod="calico-system/csi-node-driver-7pkzg" Oct 30 00:02:38.139463 kubelet[2999]: E1030 00:02:38.139270 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.139463 kubelet[2999]: W1030 00:02:38.139290 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.139463 kubelet[2999]: E1030 00:02:38.139297 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:38.139463 kubelet[2999]: I1030 00:02:38.139311 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b56fb7b2-ab30-4c17-b3d4-f41ec039c361-registration-dir\") pod \"csi-node-driver-7pkzg\" (UID: \"b56fb7b2-ab30-4c17-b3d4-f41ec039c361\") " pod="calico-system/csi-node-driver-7pkzg" Oct 30 00:02:38.139463 kubelet[2999]: E1030 00:02:38.139407 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.139463 kubelet[2999]: W1030 00:02:38.139413 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.139463 kubelet[2999]: E1030 00:02:38.139420 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:38.140942 kubelet[2999]: E1030 00:02:38.140757 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.140942 kubelet[2999]: W1030 00:02:38.140766 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.140942 kubelet[2999]: E1030 00:02:38.140774 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:38.140942 kubelet[2999]: E1030 00:02:38.140891 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.140942 kubelet[2999]: W1030 00:02:38.140903 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.140942 kubelet[2999]: E1030 00:02:38.140911 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:38.140942 kubelet[2999]: I1030 00:02:38.140926 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b56fb7b2-ab30-4c17-b3d4-f41ec039c361-kubelet-dir\") pod \"csi-node-driver-7pkzg\" (UID: \"b56fb7b2-ab30-4c17-b3d4-f41ec039c361\") " pod="calico-system/csi-node-driver-7pkzg" Oct 30 00:02:38.150667 kubelet[2999]: E1030 00:02:38.141596 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.150667 kubelet[2999]: W1030 00:02:38.141605 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.150667 kubelet[2999]: E1030 00:02:38.141615 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:38.150667 kubelet[2999]: I1030 00:02:38.141627 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b56fb7b2-ab30-4c17-b3d4-f41ec039c361-socket-dir\") pod \"csi-node-driver-7pkzg\" (UID: \"b56fb7b2-ab30-4c17-b3d4-f41ec039c361\") " pod="calico-system/csi-node-driver-7pkzg" Oct 30 00:02:38.150667 kubelet[2999]: E1030 00:02:38.141733 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.150667 kubelet[2999]: W1030 00:02:38.141740 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.150667 kubelet[2999]: E1030 00:02:38.141748 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:38.150667 kubelet[2999]: E1030 00:02:38.141823 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.150667 kubelet[2999]: W1030 00:02:38.141828 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.149262 systemd[1]: Started cri-containerd-1effb5140c4c65131e29cf7e7b799a43ef5fc5cdf9191672bb05144966866574.scope - libcontainer container 1effb5140c4c65131e29cf7e7b799a43ef5fc5cdf9191672bb05144966866574. Oct 30 00:02:38.151153 kubelet[2999]: E1030 00:02:38.141835 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:38.151153 kubelet[2999]: E1030 00:02:38.141920 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.151153 kubelet[2999]: W1030 00:02:38.141924 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.151153 kubelet[2999]: E1030 00:02:38.141929 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:38.151153 kubelet[2999]: E1030 00:02:38.142072 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.151153 kubelet[2999]: W1030 00:02:38.142078 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.151153 kubelet[2999]: E1030 00:02:38.142086 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:38.151153 kubelet[2999]: E1030 00:02:38.142324 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.151153 kubelet[2999]: W1030 00:02:38.142329 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.151153 kubelet[2999]: E1030 00:02:38.142335 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:38.151569 kubelet[2999]: E1030 00:02:38.143284 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.151569 kubelet[2999]: W1030 00:02:38.143290 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.151569 kubelet[2999]: E1030 00:02:38.143299 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:38.155134 containerd[1684]: time="2025-10-30T00:02:38.155082717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rhf5k,Uid:64cb6422-f782-4814-9e26-47ed295517b1,Namespace:calico-system,Attempt:0,}" Oct 30 00:02:38.233935 containerd[1684]: time="2025-10-30T00:02:38.232553698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cbf4bd4bc-p27nx,Uid:bf47b833-dbb7-46fa-9887-d6c44abca5e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"1effb5140c4c65131e29cf7e7b799a43ef5fc5cdf9191672bb05144966866574\"" Oct 30 00:02:38.242344 kubelet[2999]: E1030 00:02:38.242330 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.242344 kubelet[2999]: W1030 00:02:38.242340 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.242416 kubelet[2999]: E1030 00:02:38.242349 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:38.242455 kubelet[2999]: E1030 00:02:38.242442 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.242455 kubelet[2999]: W1030 00:02:38.242448 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.242455 kubelet[2999]: E1030 00:02:38.242453 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:38.242596 kubelet[2999]: E1030 00:02:38.242545 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.242596 kubelet[2999]: W1030 00:02:38.242550 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.242596 kubelet[2999]: E1030 00:02:38.242556 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:38.242672 kubelet[2999]: E1030 00:02:38.242642 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.242672 kubelet[2999]: W1030 00:02:38.242646 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.242672 kubelet[2999]: E1030 00:02:38.242650 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:38.243002 kubelet[2999]: E1030 00:02:38.242748 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.243002 kubelet[2999]: W1030 00:02:38.242753 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.243002 kubelet[2999]: E1030 00:02:38.242758 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:38.243002 kubelet[2999]: E1030 00:02:38.242841 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.243002 kubelet[2999]: W1030 00:02:38.242845 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.243002 kubelet[2999]: E1030 00:02:38.242849 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:38.243002 kubelet[2999]: E1030 00:02:38.242928 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.243002 kubelet[2999]: W1030 00:02:38.242933 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.243002 kubelet[2999]: E1030 00:02:38.242937 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:38.243262 kubelet[2999]: E1030 00:02:38.243012 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.243262 kubelet[2999]: W1030 00:02:38.243016 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.243262 kubelet[2999]: E1030 00:02:38.243020 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:38.243262 kubelet[2999]: E1030 00:02:38.243158 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.243262 kubelet[2999]: W1030 00:02:38.243163 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.243262 kubelet[2999]: E1030 00:02:38.243167 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:38.243262 kubelet[2999]: E1030 00:02:38.243257 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.243262 kubelet[2999]: W1030 00:02:38.243261 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.243395 kubelet[2999]: E1030 00:02:38.243278 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:38.243395 kubelet[2999]: E1030 00:02:38.243362 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.243395 kubelet[2999]: W1030 00:02:38.243366 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.243395 kubelet[2999]: E1030 00:02:38.243370 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:38.243453 kubelet[2999]: E1030 00:02:38.243448 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.243453 kubelet[2999]: W1030 00:02:38.243452 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.243487 kubelet[2999]: E1030 00:02:38.243456 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:38.244403 kubelet[2999]: E1030 00:02:38.243560 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.244403 kubelet[2999]: W1030 00:02:38.243578 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.244403 kubelet[2999]: E1030 00:02:38.243583 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:38.244403 kubelet[2999]: E1030 00:02:38.243666 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.244403 kubelet[2999]: W1030 00:02:38.243670 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.244403 kubelet[2999]: E1030 00:02:38.243676 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:38.244403 kubelet[2999]: E1030 00:02:38.243757 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.244403 kubelet[2999]: W1030 00:02:38.243761 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.244403 kubelet[2999]: E1030 00:02:38.243766 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:38.244403 kubelet[2999]: E1030 00:02:38.243847 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.244612 containerd[1684]: time="2025-10-30T00:02:38.244251872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 30 00:02:38.244646 kubelet[2999]: W1030 00:02:38.243851 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.244646 kubelet[2999]: E1030 00:02:38.243856 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:38.244646 kubelet[2999]: E1030 00:02:38.243936 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.244646 kubelet[2999]: W1030 00:02:38.243940 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.244646 kubelet[2999]: E1030 00:02:38.243945 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:38.244646 kubelet[2999]: E1030 00:02:38.244026 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.244646 kubelet[2999]: W1030 00:02:38.244030 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.244646 kubelet[2999]: E1030 00:02:38.244034 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:38.244646 kubelet[2999]: E1030 00:02:38.244138 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.244646 kubelet[2999]: W1030 00:02:38.244143 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.245206 kubelet[2999]: E1030 00:02:38.244147 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:38.245206 kubelet[2999]: E1030 00:02:38.244256 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.245206 kubelet[2999]: W1030 00:02:38.244274 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.245206 kubelet[2999]: E1030 00:02:38.244280 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:38.245206 kubelet[2999]: E1030 00:02:38.244413 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.245206 kubelet[2999]: W1030 00:02:38.244429 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.245206 kubelet[2999]: E1030 00:02:38.244433 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:38.245206 kubelet[2999]: E1030 00:02:38.244593 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.245206 kubelet[2999]: W1030 00:02:38.244613 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.245206 kubelet[2999]: E1030 00:02:38.244618 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:38.245386 kubelet[2999]: E1030 00:02:38.244730 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.245386 kubelet[2999]: W1030 00:02:38.244735 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.245386 kubelet[2999]: E1030 00:02:38.244740 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:38.245386 kubelet[2999]: E1030 00:02:38.244827 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.245386 kubelet[2999]: W1030 00:02:38.244842 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.245386 kubelet[2999]: E1030 00:02:38.244847 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:38.252510 kubelet[2999]: E1030 00:02:38.252496 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.252510 kubelet[2999]: W1030 00:02:38.252504 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.252510 kubelet[2999]: E1030 00:02:38.252512 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:38.255676 kubelet[2999]: E1030 00:02:38.255662 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:38.255676 kubelet[2999]: W1030 00:02:38.255672 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:38.255735 kubelet[2999]: E1030 00:02:38.255685 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:38.284985 containerd[1684]: time="2025-10-30T00:02:38.284956771Z" level=info msg="connecting to shim a6042e06e61cd7d678f09c13e0382eec58ebeccca83242bacf47707364f66888" address="unix:///run/containerd/s/df057aa2d296eff8b2db22a36750326695687e676f549ff51d13d90d1a05d71d" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:02:38.305257 systemd[1]: Started cri-containerd-a6042e06e61cd7d678f09c13e0382eec58ebeccca83242bacf47707364f66888.scope - libcontainer container a6042e06e61cd7d678f09c13e0382eec58ebeccca83242bacf47707364f66888. 
Oct 30 00:02:38.344578 containerd[1684]: time="2025-10-30T00:02:38.343970676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rhf5k,Uid:64cb6422-f782-4814-9e26-47ed295517b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6042e06e61cd7d678f09c13e0382eec58ebeccca83242bacf47707364f66888\"" Oct 30 00:02:40.031607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3246942157.mount: Deactivated successfully. Oct 30 00:02:40.033909 kubelet[2999]: E1030 00:02:40.033878 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pkzg" podUID="b56fb7b2-ab30-4c17-b3d4-f41ec039c361" Oct 30 00:02:41.048645 containerd[1684]: time="2025-10-30T00:02:41.048607130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:41.071954 containerd[1684]: time="2025-10-30T00:02:41.061230494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 30 00:02:41.078364 containerd[1684]: time="2025-10-30T00:02:41.078347095Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:41.090063 containerd[1684]: time="2025-10-30T00:02:41.090045464Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.845770823s" Oct 30 00:02:41.090211 containerd[1684]: 
time="2025-10-30T00:02:41.090198476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 30 00:02:41.090336 containerd[1684]: time="2025-10-30T00:02:41.090176848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:41.092127 containerd[1684]: time="2025-10-30T00:02:41.092094829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 30 00:02:41.119330 containerd[1684]: time="2025-10-30T00:02:41.119296342Z" level=info msg="CreateContainer within sandbox \"1effb5140c4c65131e29cf7e7b799a43ef5fc5cdf9191672bb05144966866574\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 30 00:02:41.183117 containerd[1684]: time="2025-10-30T00:02:41.181240953Z" level=info msg="Container b3ff4d2f3ecb3b83e3506a768623350121b2a4a4cf2c1dd28c50ef23c167c2a4: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:02:41.185270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3690198323.mount: Deactivated successfully. 
Oct 30 00:02:41.225253 containerd[1684]: time="2025-10-30T00:02:41.225162921Z" level=info msg="CreateContainer within sandbox \"1effb5140c4c65131e29cf7e7b799a43ef5fc5cdf9191672bb05144966866574\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b3ff4d2f3ecb3b83e3506a768623350121b2a4a4cf2c1dd28c50ef23c167c2a4\""
Oct 30 00:02:41.226248 containerd[1684]: time="2025-10-30T00:02:41.226139968Z" level=info msg="StartContainer for \"b3ff4d2f3ecb3b83e3506a768623350121b2a4a4cf2c1dd28c50ef23c167c2a4\""
Oct 30 00:02:41.227117 containerd[1684]: time="2025-10-30T00:02:41.227074483Z" level=info msg="connecting to shim b3ff4d2f3ecb3b83e3506a768623350121b2a4a4cf2c1dd28c50ef23c167c2a4" address="unix:///run/containerd/s/9fb4ae757c0ff91d19e82006a6030a4f1c41cf6ab756595e647dee3acd30a43c" protocol=ttrpc version=3
Oct 30 00:02:41.260232 systemd[1]: Started cri-containerd-b3ff4d2f3ecb3b83e3506a768623350121b2a4a4cf2c1dd28c50ef23c167c2a4.scope - libcontainer container b3ff4d2f3ecb3b83e3506a768623350121b2a4a4cf2c1dd28c50ef23c167c2a4.
Oct 30 00:02:41.303197 containerd[1684]: time="2025-10-30T00:02:41.303044288Z" level=info msg="StartContainer for \"b3ff4d2f3ecb3b83e3506a768623350121b2a4a4cf2c1dd28c50ef23c167c2a4\" returns successfully"
Oct 30 00:02:42.034120 kubelet[2999]: E1030 00:02:42.033864 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pkzg" podUID="b56fb7b2-ab30-4c17-b3d4-f41ec039c361"
Oct 30 00:02:42.141120 kubelet[2999]: I1030 00:02:42.140658 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7cbf4bd4bc-p27nx" podStartSLOduration=2.292206935 podStartE2EDuration="5.140638365s" podCreationTimestamp="2025-10-30 00:02:37 +0000 UTC" firstStartedPulling="2025-10-30 00:02:38.242791658 +0000 UTC m=+18.392437028" lastFinishedPulling="2025-10-30 00:02:41.091223083 +0000 UTC m=+21.240868458" observedRunningTime="2025-10-30 00:02:42.116597226 +0000 UTC m=+22.266242613" watchObservedRunningTime="2025-10-30 00:02:42.140638365 +0000 UTC m=+22.290283745"
Oct 30 00:02:42.172281 kubelet[2999]: E1030 00:02:42.172219 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:02:42.172281 kubelet[2999]: W1030 00:02:42.172238 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:02:42.172281 kubelet[2999]: E1030 00:02:42.172254 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:02:43.097099 containerd[1684]: time="2025-10-30T00:02:43.097023328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:02:43.097813 containerd[1684]: time="2025-10-30T00:02:43.097677545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Oct 30 00:02:43.098439 containerd[1684]: time="2025-10-30T00:02:43.098312317Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:02:43.101545 containerd[1684]: time="2025-10-30T00:02:43.101520830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:02:43.104934 containerd[1684]: time="2025-10-30T00:02:43.102182799Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.009994387s"
Oct 30 00:02:43.104934 containerd[1684]: time="2025-10-30T00:02:43.102210147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Oct 30 00:02:43.108111 containerd[1684]: time="2025-10-30T00:02:43.105368933Z" level=info msg="CreateContainer within sandbox \"a6042e06e61cd7d678f09c13e0382eec58ebeccca83242bacf47707364f66888\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Oct 30 00:02:43.115914 containerd[1684]: time="2025-10-30T00:02:43.115886196Z" level=info msg="Container 174e422e5332e2075b35db33176019cbda0267e866395f3ddc63d7e3a05c4eca: CDI devices from CRI Config.CDIDevices: []"
Oct 30 00:02:43.119082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount271284497.mount: Deactivated successfully.
Oct 30 00:02:43.126359 containerd[1684]: time="2025-10-30T00:02:43.126321053Z" level=info msg="CreateContainer within sandbox \"a6042e06e61cd7d678f09c13e0382eec58ebeccca83242bacf47707364f66888\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"174e422e5332e2075b35db33176019cbda0267e866395f3ddc63d7e3a05c4eca\""
Oct 30 00:02:43.127002 containerd[1684]: time="2025-10-30T00:02:43.126985506Z" level=info msg="StartContainer for \"174e422e5332e2075b35db33176019cbda0267e866395f3ddc63d7e3a05c4eca\""
Oct 30 00:02:43.129652 containerd[1684]: time="2025-10-30T00:02:43.129569438Z" level=info msg="connecting to shim 174e422e5332e2075b35db33176019cbda0267e866395f3ddc63d7e3a05c4eca" address="unix:///run/containerd/s/df057aa2d296eff8b2db22a36750326695687e676f549ff51d13d90d1a05d71d" protocol=ttrpc version=3
Oct 30 00:02:43.155273 systemd[1]: Started cri-containerd-174e422e5332e2075b35db33176019cbda0267e866395f3ddc63d7e3a05c4eca.scope - libcontainer container 174e422e5332e2075b35db33176019cbda0267e866395f3ddc63d7e3a05c4eca.
Oct 30 00:02:43.180863 kubelet[2999]: E1030 00:02:43.180841 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:02:43.181195 kubelet[2999]: W1030 00:02:43.181125 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:02:43.186939 kubelet[2999]: E1030 00:02:43.186878 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Oct 30 00:02:43.189528 kubelet[2999]: E1030 00:02:43.189491 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:43.189528 kubelet[2999]: W1030 00:02:43.189496 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:43.189528 kubelet[2999]: E1030 00:02:43.189502 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:43.189698 kubelet[2999]: E1030 00:02:43.189650 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:43.189698 kubelet[2999]: W1030 00:02:43.189655 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:43.189698 kubelet[2999]: E1030 00:02:43.189661 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:43.189799 kubelet[2999]: E1030 00:02:43.189793 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:43.189840 kubelet[2999]: W1030 00:02:43.189834 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:43.189912 kubelet[2999]: E1030 00:02:43.189876 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:43.190007 kubelet[2999]: E1030 00:02:43.190002 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:43.190153 kubelet[2999]: W1030 00:02:43.190037 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:43.190153 kubelet[2999]: E1030 00:02:43.190044 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:43.190243 kubelet[2999]: E1030 00:02:43.190237 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:43.190284 kubelet[2999]: W1030 00:02:43.190279 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:43.190358 kubelet[2999]: E1030 00:02:43.190315 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:43.190474 kubelet[2999]: E1030 00:02:43.190421 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:43.190474 kubelet[2999]: W1030 00:02:43.190426 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:43.190474 kubelet[2999]: E1030 00:02:43.190431 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:43.190631 kubelet[2999]: E1030 00:02:43.190571 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:43.190631 kubelet[2999]: W1030 00:02:43.190576 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:43.190631 kubelet[2999]: E1030 00:02:43.190581 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:43.190774 kubelet[2999]: E1030 00:02:43.190728 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:43.190774 kubelet[2999]: W1030 00:02:43.190734 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:43.190774 kubelet[2999]: E1030 00:02:43.190739 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:43.190905 kubelet[2999]: E1030 00:02:43.190865 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:43.190905 kubelet[2999]: W1030 00:02:43.190870 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:43.190905 kubelet[2999]: E1030 00:02:43.190874 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:43.191080 kubelet[2999]: E1030 00:02:43.191002 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:43.191080 kubelet[2999]: W1030 00:02:43.191025 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:43.191080 kubelet[2999]: E1030 00:02:43.191030 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:43.191218 kubelet[2999]: E1030 00:02:43.191201 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:43.191218 kubelet[2999]: W1030 00:02:43.191206 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:43.191218 kubelet[2999]: E1030 00:02:43.191211 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:43.191553 kubelet[2999]: E1030 00:02:43.191503 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:43.191553 kubelet[2999]: W1030 00:02:43.191510 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:43.191553 kubelet[2999]: E1030 00:02:43.191516 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:43.191718 kubelet[2999]: E1030 00:02:43.191675 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:43.191718 kubelet[2999]: W1030 00:02:43.191680 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:43.191718 kubelet[2999]: E1030 00:02:43.191685 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:43.191940 kubelet[2999]: E1030 00:02:43.191872 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:43.191940 kubelet[2999]: W1030 00:02:43.191879 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:43.191940 kubelet[2999]: E1030 00:02:43.191888 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:43.192121 kubelet[2999]: E1030 00:02:43.192085 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:43.192121 kubelet[2999]: W1030 00:02:43.192090 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:43.192121 kubelet[2999]: E1030 00:02:43.192095 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:43.192509 kubelet[2999]: E1030 00:02:43.192495 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:43.192544 kubelet[2999]: W1030 00:02:43.192509 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:43.192544 kubelet[2999]: E1030 00:02:43.192519 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:43.192661 kubelet[2999]: E1030 00:02:43.192625 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:43.192661 kubelet[2999]: W1030 00:02:43.192636 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:43.192661 kubelet[2999]: E1030 00:02:43.192644 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:43.192815 kubelet[2999]: E1030 00:02:43.192734 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:43.192815 kubelet[2999]: W1030 00:02:43.192739 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:43.192815 kubelet[2999]: E1030 00:02:43.192744 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:43.192910 kubelet[2999]: E1030 00:02:43.192825 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:43.192910 kubelet[2999]: W1030 00:02:43.192829 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:43.192910 kubelet[2999]: E1030 00:02:43.192834 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:43.192979 kubelet[2999]: E1030 00:02:43.192922 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:43.192979 kubelet[2999]: W1030 00:02:43.192926 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:43.192979 kubelet[2999]: E1030 00:02:43.192931 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:02:43.193064 kubelet[2999]: E1030 00:02:43.193026 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:43.193064 kubelet[2999]: W1030 00:02:43.193032 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:43.193064 kubelet[2999]: E1030 00:02:43.193037 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:43.193213 kubelet[2999]: E1030 00:02:43.193204 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:02:43.193213 kubelet[2999]: W1030 00:02:43.193211 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:02:43.193256 kubelet[2999]: E1030 00:02:43.193216 2999 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:02:43.219984 containerd[1684]: time="2025-10-30T00:02:43.219951800Z" level=info msg="StartContainer for \"174e422e5332e2075b35db33176019cbda0267e866395f3ddc63d7e3a05c4eca\" returns successfully" Oct 30 00:02:43.228244 systemd[1]: cri-containerd-174e422e5332e2075b35db33176019cbda0267e866395f3ddc63d7e3a05c4eca.scope: Deactivated successfully. 
Oct 30 00:02:43.238940 containerd[1684]: time="2025-10-30T00:02:43.238912598Z" level=info msg="received exit event container_id:\"174e422e5332e2075b35db33176019cbda0267e866395f3ddc63d7e3a05c4eca\" id:\"174e422e5332e2075b35db33176019cbda0267e866395f3ddc63d7e3a05c4eca\" pid:3650 exited_at:{seconds:1761782563 nanos:231451635}" Oct 30 00:02:43.258411 containerd[1684]: time="2025-10-30T00:02:43.257891585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"174e422e5332e2075b35db33176019cbda0267e866395f3ddc63d7e3a05c4eca\" id:\"174e422e5332e2075b35db33176019cbda0267e866395f3ddc63d7e3a05c4eca\" pid:3650 exited_at:{seconds:1761782563 nanos:231451635}" Oct 30 00:02:43.268053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-174e422e5332e2075b35db33176019cbda0267e866395f3ddc63d7e3a05c4eca-rootfs.mount: Deactivated successfully. Oct 30 00:02:44.035758 kubelet[2999]: E1030 00:02:44.035150 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pkzg" podUID="b56fb7b2-ab30-4c17-b3d4-f41ec039c361" Oct 30 00:02:44.098086 containerd[1684]: time="2025-10-30T00:02:44.097261682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 30 00:02:46.061099 kubelet[2999]: E1030 00:02:46.061015 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pkzg" podUID="b56fb7b2-ab30-4c17-b3d4-f41ec039c361" Oct 30 00:02:46.796382 containerd[1684]: time="2025-10-30T00:02:46.796358920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 
00:02:46.797987 containerd[1684]: time="2025-10-30T00:02:46.797972438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 30 00:02:46.809116 containerd[1684]: time="2025-10-30T00:02:46.808994965Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:46.809960 containerd[1684]: time="2025-10-30T00:02:46.809947911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:46.811228 containerd[1684]: time="2025-10-30T00:02:46.811212332Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.712641083s" Oct 30 00:02:46.812121 containerd[1684]: time="2025-10-30T00:02:46.811287558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 30 00:02:46.817725 containerd[1684]: time="2025-10-30T00:02:46.817710720Z" level=info msg="CreateContainer within sandbox \"a6042e06e61cd7d678f09c13e0382eec58ebeccca83242bacf47707364f66888\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 30 00:02:46.825701 containerd[1684]: time="2025-10-30T00:02:46.825677701Z" level=info msg="Container 1f94a7bd9bc039ef6f71e9e09761f6cae4e9a1c355343528c207463298554686: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:02:46.828782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2563917979.mount: Deactivated 
successfully. Oct 30 00:02:46.845692 containerd[1684]: time="2025-10-30T00:02:46.845668638Z" level=info msg="CreateContainer within sandbox \"a6042e06e61cd7d678f09c13e0382eec58ebeccca83242bacf47707364f66888\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1f94a7bd9bc039ef6f71e9e09761f6cae4e9a1c355343528c207463298554686\"" Oct 30 00:02:46.846176 containerd[1684]: time="2025-10-30T00:02:46.846165566Z" level=info msg="StartContainer for \"1f94a7bd9bc039ef6f71e9e09761f6cae4e9a1c355343528c207463298554686\"" Oct 30 00:02:46.847236 containerd[1684]: time="2025-10-30T00:02:46.847217445Z" level=info msg="connecting to shim 1f94a7bd9bc039ef6f71e9e09761f6cae4e9a1c355343528c207463298554686" address="unix:///run/containerd/s/df057aa2d296eff8b2db22a36750326695687e676f549ff51d13d90d1a05d71d" protocol=ttrpc version=3 Oct 30 00:02:46.863237 systemd[1]: Started cri-containerd-1f94a7bd9bc039ef6f71e9e09761f6cae4e9a1c355343528c207463298554686.scope - libcontainer container 1f94a7bd9bc039ef6f71e9e09761f6cae4e9a1c355343528c207463298554686. Oct 30 00:02:46.890800 containerd[1684]: time="2025-10-30T00:02:46.890778987Z" level=info msg="StartContainer for \"1f94a7bd9bc039ef6f71e9e09761f6cae4e9a1c355343528c207463298554686\" returns successfully" Oct 30 00:02:48.033715 kubelet[2999]: E1030 00:02:48.033606 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pkzg" podUID="b56fb7b2-ab30-4c17-b3d4-f41ec039c361" Oct 30 00:02:48.258391 systemd[1]: cri-containerd-1f94a7bd9bc039ef6f71e9e09761f6cae4e9a1c355343528c207463298554686.scope: Deactivated successfully. 
Oct 30 00:02:48.258578 systemd[1]: cri-containerd-1f94a7bd9bc039ef6f71e9e09761f6cae4e9a1c355343528c207463298554686.scope: Consumed 311ms CPU time, 165.2M memory peak, 3.6M read from disk, 171.3M written to disk. Oct 30 00:02:48.324431 containerd[1684]: time="2025-10-30T00:02:48.324303103Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f94a7bd9bc039ef6f71e9e09761f6cae4e9a1c355343528c207463298554686\" id:\"1f94a7bd9bc039ef6f71e9e09761f6cae4e9a1c355343528c207463298554686\" pid:3741 exited_at:{seconds:1761782568 nanos:307680757}" Oct 30 00:02:48.325184 containerd[1684]: time="2025-10-30T00:02:48.325159568Z" level=info msg="received exit event container_id:\"1f94a7bd9bc039ef6f71e9e09761f6cae4e9a1c355343528c207463298554686\" id:\"1f94a7bd9bc039ef6f71e9e09761f6cae4e9a1c355343528c207463298554686\" pid:3741 exited_at:{seconds:1761782568 nanos:307680757}" Oct 30 00:02:48.331470 kubelet[2999]: I1030 00:02:48.331453 2999 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 30 00:02:48.365303 systemd[1]: Created slice kubepods-burstable-pod070e753e_6ee6_4538_bab1_e3c05026a56a.slice - libcontainer container kubepods-burstable-pod070e753e_6ee6_4538_bab1_e3c05026a56a.slice. Oct 30 00:02:48.371750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f94a7bd9bc039ef6f71e9e09761f6cae4e9a1c355343528c207463298554686-rootfs.mount: Deactivated successfully. Oct 30 00:02:48.381175 systemd[1]: Created slice kubepods-besteffort-pod5b6d7e41_a662_49c6_bb05_81f7e9c9a829.slice - libcontainer container kubepods-besteffort-pod5b6d7e41_a662_49c6_bb05_81f7e9c9a829.slice. Oct 30 00:02:48.390288 systemd[1]: Created slice kubepods-burstable-pod2d168db8_a265_440f_81ca_e805c0d25f52.slice - libcontainer container kubepods-burstable-pod2d168db8_a265_440f_81ca_e805c0d25f52.slice. 
Oct 30 00:02:48.399051 kubelet[2999]: E1030 00:02:48.398952 2999 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"whisker-ca-bundle\"" type="*v1.ConfigMap" Oct 30 00:02:48.400696 systemd[1]: Created slice kubepods-besteffort-podf2b9de57_0f59_4e5f_9352_7a17cefee0f0.slice - libcontainer container kubepods-besteffort-podf2b9de57_0f59_4e5f_9352_7a17cefee0f0.slice. Oct 30 00:02:48.406205 systemd[1]: Created slice kubepods-besteffort-pod3e83ebe5_6a04_4887_9815_f7c972a7870a.slice - libcontainer container kubepods-besteffort-pod3e83ebe5_6a04_4887_9815_f7c972a7870a.slice. Oct 30 00:02:48.412190 systemd[1]: Created slice kubepods-besteffort-poddc87ca77_2c29_4c4e_beb8_2b81cdefd490.slice - libcontainer container kubepods-besteffort-poddc87ca77_2c29_4c4e_beb8_2b81cdefd490.slice. Oct 30 00:02:48.417560 systemd[1]: Created slice kubepods-besteffort-pod542f58e7_43b6_487a_8c10_54da1c4b4004.slice - libcontainer container kubepods-besteffort-pod542f58e7_43b6_487a_8c10_54da1c4b4004.slice. 
Oct 30 00:02:48.530189 kubelet[2999]: I1030 00:02:48.530163 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d168db8-a265-440f-81ca-e805c0d25f52-config-volume\") pod \"coredns-66bc5c9577-64n8r\" (UID: \"2d168db8-a265-440f-81ca-e805c0d25f52\") " pod="kube-system/coredns-66bc5c9577-64n8r" Oct 30 00:02:48.530632 kubelet[2999]: I1030 00:02:48.530424 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3e83ebe5-6a04-4887-9815-f7c972a7870a-calico-apiserver-certs\") pod \"calico-apiserver-7b74d8c9c5-74m4p\" (UID: \"3e83ebe5-6a04-4887-9815-f7c972a7870a\") " pod="calico-apiserver/calico-apiserver-7b74d8c9c5-74m4p" Oct 30 00:02:48.530632 kubelet[2999]: I1030 00:02:48.530444 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b6d7e41-a662-49c6-bb05-81f7e9c9a829-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-pm7rt\" (UID: \"5b6d7e41-a662-49c6-bb05-81f7e9c9a829\") " pod="calico-system/goldmane-7c778bb748-pm7rt" Oct 30 00:02:48.530632 kubelet[2999]: I1030 00:02:48.530453 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbqdg\" (UniqueName: \"kubernetes.io/projected/3e83ebe5-6a04-4887-9815-f7c972a7870a-kube-api-access-hbqdg\") pod \"calico-apiserver-7b74d8c9c5-74m4p\" (UID: \"3e83ebe5-6a04-4887-9815-f7c972a7870a\") " pod="calico-apiserver/calico-apiserver-7b74d8c9c5-74m4p" Oct 30 00:02:48.530632 kubelet[2999]: I1030 00:02:48.530463 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzxnm\" (UniqueName: \"kubernetes.io/projected/dc87ca77-2c29-4c4e-beb8-2b81cdefd490-kube-api-access-bzxnm\") pod 
\"calico-kube-controllers-7d5c95967b-s9656\" (UID: \"dc87ca77-2c29-4c4e-beb8-2b81cdefd490\") " pod="calico-system/calico-kube-controllers-7d5c95967b-s9656" Oct 30 00:02:48.530632 kubelet[2999]: I1030 00:02:48.530473 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/070e753e-6ee6-4538-bab1-e3c05026a56a-config-volume\") pod \"coredns-66bc5c9577-nf8m8\" (UID: \"070e753e-6ee6-4538-bab1-e3c05026a56a\") " pod="kube-system/coredns-66bc5c9577-nf8m8" Oct 30 00:02:48.530792 kubelet[2999]: I1030 00:02:48.530481 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2b9de57-0f59-4e5f-9352-7a17cefee0f0-whisker-backend-key-pair\") pod \"whisker-5456cc7b59-4vbts\" (UID: \"f2b9de57-0f59-4e5f-9352-7a17cefee0f0\") " pod="calico-system/whisker-5456cc7b59-4vbts" Oct 30 00:02:48.530792 kubelet[2999]: I1030 00:02:48.530511 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j6tp\" (UniqueName: \"kubernetes.io/projected/2d168db8-a265-440f-81ca-e805c0d25f52-kube-api-access-4j6tp\") pod \"coredns-66bc5c9577-64n8r\" (UID: \"2d168db8-a265-440f-81ca-e805c0d25f52\") " pod="kube-system/coredns-66bc5c9577-64n8r" Oct 30 00:02:48.530792 kubelet[2999]: I1030 00:02:48.530528 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6d7e41-a662-49c6-bb05-81f7e9c9a829-config\") pod \"goldmane-7c778bb748-pm7rt\" (UID: \"5b6d7e41-a662-49c6-bb05-81f7e9c9a829\") " pod="calico-system/goldmane-7c778bb748-pm7rt" Oct 30 00:02:48.530792 kubelet[2999]: I1030 00:02:48.530543 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/5b6d7e41-a662-49c6-bb05-81f7e9c9a829-goldmane-key-pair\") pod \"goldmane-7c778bb748-pm7rt\" (UID: \"5b6d7e41-a662-49c6-bb05-81f7e9c9a829\") " pod="calico-system/goldmane-7c778bb748-pm7rt" Oct 30 00:02:48.530792 kubelet[2999]: I1030 00:02:48.530577 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/542f58e7-43b6-487a-8c10-54da1c4b4004-calico-apiserver-certs\") pod \"calico-apiserver-7b74d8c9c5-hkpns\" (UID: \"542f58e7-43b6-487a-8c10-54da1c4b4004\") " pod="calico-apiserver/calico-apiserver-7b74d8c9c5-hkpns" Oct 30 00:02:48.530874 kubelet[2999]: I1030 00:02:48.530615 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt4qp\" (UniqueName: \"kubernetes.io/projected/5b6d7e41-a662-49c6-bb05-81f7e9c9a829-kube-api-access-wt4qp\") pod \"goldmane-7c778bb748-pm7rt\" (UID: \"5b6d7e41-a662-49c6-bb05-81f7e9c9a829\") " pod="calico-system/goldmane-7c778bb748-pm7rt" Oct 30 00:02:48.530874 kubelet[2999]: I1030 00:02:48.530633 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc87ca77-2c29-4c4e-beb8-2b81cdefd490-tigera-ca-bundle\") pod \"calico-kube-controllers-7d5c95967b-s9656\" (UID: \"dc87ca77-2c29-4c4e-beb8-2b81cdefd490\") " pod="calico-system/calico-kube-controllers-7d5c95967b-s9656" Oct 30 00:02:48.530874 kubelet[2999]: I1030 00:02:48.530651 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6s62\" (UniqueName: \"kubernetes.io/projected/070e753e-6ee6-4538-bab1-e3c05026a56a-kube-api-access-k6s62\") pod \"coredns-66bc5c9577-nf8m8\" (UID: \"070e753e-6ee6-4538-bab1-e3c05026a56a\") " pod="kube-system/coredns-66bc5c9577-nf8m8" Oct 30 00:02:48.530874 kubelet[2999]: I1030 00:02:48.530666 2999 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2b9de57-0f59-4e5f-9352-7a17cefee0f0-whisker-ca-bundle\") pod \"whisker-5456cc7b59-4vbts\" (UID: \"f2b9de57-0f59-4e5f-9352-7a17cefee0f0\") " pod="calico-system/whisker-5456cc7b59-4vbts" Oct 30 00:02:48.530874 kubelet[2999]: I1030 00:02:48.530682 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlvhf\" (UniqueName: \"kubernetes.io/projected/f2b9de57-0f59-4e5f-9352-7a17cefee0f0-kube-api-access-xlvhf\") pod \"whisker-5456cc7b59-4vbts\" (UID: \"f2b9de57-0f59-4e5f-9352-7a17cefee0f0\") " pod="calico-system/whisker-5456cc7b59-4vbts" Oct 30 00:02:48.531129 kubelet[2999]: I1030 00:02:48.530697 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c74pq\" (UniqueName: \"kubernetes.io/projected/542f58e7-43b6-487a-8c10-54da1c4b4004-kube-api-access-c74pq\") pod \"calico-apiserver-7b74d8c9c5-hkpns\" (UID: \"542f58e7-43b6-487a-8c10-54da1c4b4004\") " pod="calico-apiserver/calico-apiserver-7b74d8c9c5-hkpns" Oct 30 00:02:48.684889 containerd[1684]: time="2025-10-30T00:02:48.684855398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nf8m8,Uid:070e753e-6ee6-4538-bab1-e3c05026a56a,Namespace:kube-system,Attempt:0,}" Oct 30 00:02:48.690159 containerd[1684]: time="2025-10-30T00:02:48.690137902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pm7rt,Uid:5b6d7e41-a662-49c6-bb05-81f7e9c9a829,Namespace:calico-system,Attempt:0,}" Oct 30 00:02:48.706188 containerd[1684]: time="2025-10-30T00:02:48.706155796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-64n8r,Uid:2d168db8-a265-440f-81ca-e805c0d25f52,Namespace:kube-system,Attempt:0,}" Oct 30 00:02:48.718867 containerd[1684]: time="2025-10-30T00:02:48.718845164Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d5c95967b-s9656,Uid:dc87ca77-2c29-4c4e-beb8-2b81cdefd490,Namespace:calico-system,Attempt:0,}" Oct 30 00:02:48.739686 containerd[1684]: time="2025-10-30T00:02:48.739653839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b74d8c9c5-hkpns,Uid:542f58e7-43b6-487a-8c10-54da1c4b4004,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:02:48.740943 containerd[1684]: time="2025-10-30T00:02:48.740911508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b74d8c9c5-74m4p,Uid:3e83ebe5-6a04-4887-9815-f7c972a7870a,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:02:48.941251 containerd[1684]: time="2025-10-30T00:02:48.940494163Z" level=error msg="Failed to destroy network for sandbox \"833dfeeb766028b9b084b123b6664a6f2030d2a9ffb8f8237d1392973f2695e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:48.941251 containerd[1684]: time="2025-10-30T00:02:48.941081932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b74d8c9c5-74m4p,Uid:3e83ebe5-6a04-4887-9815-f7c972a7870a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"833dfeeb766028b9b084b123b6664a6f2030d2a9ffb8f8237d1392973f2695e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:48.944581 containerd[1684]: time="2025-10-30T00:02:48.944562691Z" level=error msg="Failed to destroy network for sandbox \"fada15fbc27ec625e89c8df271bbad2035e22ce8d3b4a8eb4c20bd0f8e404745\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:48.945370 containerd[1684]: time="2025-10-30T00:02:48.945355927Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d5c95967b-s9656,Uid:dc87ca77-2c29-4c4e-beb8-2b81cdefd490,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fada15fbc27ec625e89c8df271bbad2035e22ce8d3b4a8eb4c20bd0f8e404745\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:48.948416 kubelet[2999]: E1030 00:02:48.945562 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"833dfeeb766028b9b084b123b6664a6f2030d2a9ffb8f8237d1392973f2695e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:48.948416 kubelet[2999]: E1030 00:02:48.945602 2999 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"833dfeeb766028b9b084b123b6664a6f2030d2a9ffb8f8237d1392973f2695e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b74d8c9c5-74m4p" Oct 30 00:02:48.948416 kubelet[2999]: E1030 00:02:48.945615 2999 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"833dfeeb766028b9b084b123b6664a6f2030d2a9ffb8f8237d1392973f2695e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b74d8c9c5-74m4p" Oct 30 00:02:48.948526 kubelet[2999]: E1030 00:02:48.945647 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b74d8c9c5-74m4p_calico-apiserver(3e83ebe5-6a04-4887-9815-f7c972a7870a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b74d8c9c5-74m4p_calico-apiserver(3e83ebe5-6a04-4887-9815-f7c972a7870a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"833dfeeb766028b9b084b123b6664a6f2030d2a9ffb8f8237d1392973f2695e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b74d8c9c5-74m4p" podUID="3e83ebe5-6a04-4887-9815-f7c972a7870a" Oct 30 00:02:48.948526 kubelet[2999]: E1030 00:02:48.946171 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fada15fbc27ec625e89c8df271bbad2035e22ce8d3b4a8eb4c20bd0f8e404745\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:48.948526 kubelet[2999]: E1030 00:02:48.946186 2999 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fada15fbc27ec625e89c8df271bbad2035e22ce8d3b4a8eb4c20bd0f8e404745\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d5c95967b-s9656" Oct 30 00:02:48.948597 kubelet[2999]: E1030 00:02:48.946195 2999 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fada15fbc27ec625e89c8df271bbad2035e22ce8d3b4a8eb4c20bd0f8e404745\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d5c95967b-s9656" Oct 30 00:02:48.948597 kubelet[2999]: E1030 00:02:48.946215 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d5c95967b-s9656_calico-system(dc87ca77-2c29-4c4e-beb8-2b81cdefd490)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d5c95967b-s9656_calico-system(dc87ca77-2c29-4c4e-beb8-2b81cdefd490)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fada15fbc27ec625e89c8df271bbad2035e22ce8d3b4a8eb4c20bd0f8e404745\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d5c95967b-s9656" podUID="dc87ca77-2c29-4c4e-beb8-2b81cdefd490" Oct 30 00:02:48.961324 containerd[1684]: time="2025-10-30T00:02:48.961243571Z" level=error msg="Failed to destroy network for sandbox \"f332424eb32d395515ee65561bb8b48b592bf926bc4e7cc8f9d6ea1667d672d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:48.961488 containerd[1684]: time="2025-10-30T00:02:48.961289815Z" level=error msg="Failed to destroy network for sandbox \"fd68f2daff8a56d10469b174d5411dc91f219c300d92ca3614a5681caa382a76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:48.961992 containerd[1684]: time="2025-10-30T00:02:48.961299701Z" level=error msg="Failed to destroy network for sandbox \"6566de63e30abc0246a2f70b7bc9529d3034ae22d422ccf0c539253a935844cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:48.961992 containerd[1684]: time="2025-10-30T00:02:48.961314427Z" level=error msg="Failed to destroy network for sandbox \"89a03ae1327f32fe04970048f50be8e23fc87a4c7d481beaad07da7495e802c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:48.962201 containerd[1684]: time="2025-10-30T00:02:48.962162495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-64n8r,Uid:2d168db8-a265-440f-81ca-e805c0d25f52,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f332424eb32d395515ee65561bb8b48b592bf926bc4e7cc8f9d6ea1667d672d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:48.962771 containerd[1684]: time="2025-10-30T00:02:48.962491526Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pm7rt,Uid:5b6d7e41-a662-49c6-bb05-81f7e9c9a829,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6566de63e30abc0246a2f70b7bc9529d3034ae22d422ccf0c539253a935844cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Oct 30 00:02:48.962831 kubelet[2999]: E1030 00:02:48.962526 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f332424eb32d395515ee65561bb8b48b592bf926bc4e7cc8f9d6ea1667d672d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:48.962831 kubelet[2999]: E1030 00:02:48.962565 2999 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f332424eb32d395515ee65561bb8b48b592bf926bc4e7cc8f9d6ea1667d672d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-64n8r" Oct 30 00:02:48.962831 kubelet[2999]: E1030 00:02:48.962579 2999 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f332424eb32d395515ee65561bb8b48b592bf926bc4e7cc8f9d6ea1667d672d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-64n8r" Oct 30 00:02:48.963290 kubelet[2999]: E1030 00:02:48.962617 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-64n8r_kube-system(2d168db8-a265-440f-81ca-e805c0d25f52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-64n8r_kube-system(2d168db8-a265-440f-81ca-e805c0d25f52)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f332424eb32d395515ee65561bb8b48b592bf926bc4e7cc8f9d6ea1667d672d8\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-64n8r" podUID="2d168db8-a265-440f-81ca-e805c0d25f52" Oct 30 00:02:48.963290 kubelet[2999]: E1030 00:02:48.963062 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6566de63e30abc0246a2f70b7bc9529d3034ae22d422ccf0c539253a935844cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:48.963290 kubelet[2999]: E1030 00:02:48.963127 2999 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6566de63e30abc0246a2f70b7bc9529d3034ae22d422ccf0c539253a935844cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-pm7rt" Oct 30 00:02:48.963374 containerd[1684]: time="2025-10-30T00:02:48.963181533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b74d8c9c5-hkpns,Uid:542f58e7-43b6-487a-8c10-54da1c4b4004,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd68f2daff8a56d10469b174d5411dc91f219c300d92ca3614a5681caa382a76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:48.964071 kubelet[2999]: E1030 00:02:48.964057 2999 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6566de63e30abc0246a2f70b7bc9529d3034ae22d422ccf0c539253a935844cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-pm7rt" Oct 30 00:02:48.964248 kubelet[2999]: E1030 00:02:48.964183 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-pm7rt_calico-system(5b6d7e41-a662-49c6-bb05-81f7e9c9a829)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-pm7rt_calico-system(5b6d7e41-a662-49c6-bb05-81f7e9c9a829)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6566de63e30abc0246a2f70b7bc9529d3034ae22d422ccf0c539253a935844cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-pm7rt" podUID="5b6d7e41-a662-49c6-bb05-81f7e9c9a829" Oct 30 00:02:48.964445 containerd[1684]: time="2025-10-30T00:02:48.964308155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nf8m8,Uid:070e753e-6ee6-4538-bab1-e3c05026a56a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"89a03ae1327f32fe04970048f50be8e23fc87a4c7d481beaad07da7495e802c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:48.964503 kubelet[2999]: E1030 00:02:48.964349 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd68f2daff8a56d10469b174d5411dc91f219c300d92ca3614a5681caa382a76\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:48.964503 kubelet[2999]: E1030 00:02:48.964363 2999 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd68f2daff8a56d10469b174d5411dc91f219c300d92ca3614a5681caa382a76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b74d8c9c5-hkpns" Oct 30 00:02:48.964503 kubelet[2999]: E1030 00:02:48.964372 2999 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd68f2daff8a56d10469b174d5411dc91f219c300d92ca3614a5681caa382a76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b74d8c9c5-hkpns" Oct 30 00:02:48.964562 kubelet[2999]: E1030 00:02:48.964390 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b74d8c9c5-hkpns_calico-apiserver(542f58e7-43b6-487a-8c10-54da1c4b4004)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b74d8c9c5-hkpns_calico-apiserver(542f58e7-43b6-487a-8c10-54da1c4b4004)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd68f2daff8a56d10469b174d5411dc91f219c300d92ca3614a5681caa382a76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b74d8c9c5-hkpns" podUID="542f58e7-43b6-487a-8c10-54da1c4b4004" Oct 30 
00:02:48.965168 kubelet[2999]: E1030 00:02:48.965052 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89a03ae1327f32fe04970048f50be8e23fc87a4c7d481beaad07da7495e802c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:48.965168 kubelet[2999]: E1030 00:02:48.965128 2999 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89a03ae1327f32fe04970048f50be8e23fc87a4c7d481beaad07da7495e802c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nf8m8" Oct 30 00:02:48.965168 kubelet[2999]: E1030 00:02:48.965143 2999 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89a03ae1327f32fe04970048f50be8e23fc87a4c7d481beaad07da7495e802c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nf8m8" Oct 30 00:02:48.965293 kubelet[2999]: E1030 00:02:48.965274 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-nf8m8_kube-system(070e753e-6ee6-4538-bab1-e3c05026a56a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-nf8m8_kube-system(070e753e-6ee6-4538-bab1-e3c05026a56a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89a03ae1327f32fe04970048f50be8e23fc87a4c7d481beaad07da7495e802c7\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nf8m8" podUID="070e753e-6ee6-4538-bab1-e3c05026a56a" Oct 30 00:02:49.107292 containerd[1684]: time="2025-10-30T00:02:49.107263804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 30 00:02:49.304991 containerd[1684]: time="2025-10-30T00:02:49.304420465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5456cc7b59-4vbts,Uid:f2b9de57-0f59-4e5f-9352-7a17cefee0f0,Namespace:calico-system,Attempt:0,}" Oct 30 00:02:49.335911 containerd[1684]: time="2025-10-30T00:02:49.335879269Z" level=error msg="Failed to destroy network for sandbox \"9acaacfce0295345fc8bcaf2d5a3ef8457bcdcbfdc217f5e2906ea8256863d62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:49.336395 containerd[1684]: time="2025-10-30T00:02:49.336373150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5456cc7b59-4vbts,Uid:f2b9de57-0f59-4e5f-9352-7a17cefee0f0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9acaacfce0295345fc8bcaf2d5a3ef8457bcdcbfdc217f5e2906ea8256863d62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:49.336554 kubelet[2999]: E1030 00:02:49.336529 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9acaacfce0295345fc8bcaf2d5a3ef8457bcdcbfdc217f5e2906ea8256863d62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Oct 30 00:02:49.336681 kubelet[2999]: E1030 00:02:49.336564 2999 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9acaacfce0295345fc8bcaf2d5a3ef8457bcdcbfdc217f5e2906ea8256863d62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5456cc7b59-4vbts" Oct 30 00:02:49.336681 kubelet[2999]: E1030 00:02:49.336578 2999 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9acaacfce0295345fc8bcaf2d5a3ef8457bcdcbfdc217f5e2906ea8256863d62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5456cc7b59-4vbts" Oct 30 00:02:49.336681 kubelet[2999]: E1030 00:02:49.336623 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5456cc7b59-4vbts_calico-system(f2b9de57-0f59-4e5f-9352-7a17cefee0f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5456cc7b59-4vbts_calico-system(f2b9de57-0f59-4e5f-9352-7a17cefee0f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9acaacfce0295345fc8bcaf2d5a3ef8457bcdcbfdc217f5e2906ea8256863d62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5456cc7b59-4vbts" podUID="f2b9de57-0f59-4e5f-9352-7a17cefee0f0" Oct 30 00:02:49.371633 systemd[1]: run-netns-cni\x2d6a0f2030\x2dbf9c\x2d1180\x2d6c46\x2d30f43e40823d.mount: Deactivated successfully. 
Oct 30 00:02:50.037428 systemd[1]: Created slice kubepods-besteffort-podb56fb7b2_ab30_4c17_b3d4_f41ec039c361.slice - libcontainer container kubepods-besteffort-podb56fb7b2_ab30_4c17_b3d4_f41ec039c361.slice. Oct 30 00:02:50.039730 containerd[1684]: time="2025-10-30T00:02:50.039701972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7pkzg,Uid:b56fb7b2-ab30-4c17-b3d4-f41ec039c361,Namespace:calico-system,Attempt:0,}" Oct 30 00:02:50.077426 containerd[1684]: time="2025-10-30T00:02:50.076130695Z" level=error msg="Failed to destroy network for sandbox \"2a0a348b420c20579262fcff53943785592a1403f9a29dd9ba724dd6440c29f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:50.077427 systemd[1]: run-netns-cni\x2d26d5974a\x2d2ceb\x2d38db\x2debfa\x2dc0638174c85c.mount: Deactivated successfully. Oct 30 00:02:50.078377 containerd[1684]: time="2025-10-30T00:02:50.078332339Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7pkzg,Uid:b56fb7b2-ab30-4c17-b3d4-f41ec039c361,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a0a348b420c20579262fcff53943785592a1403f9a29dd9ba724dd6440c29f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:50.079146 kubelet[2999]: E1030 00:02:50.078531 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a0a348b420c20579262fcff53943785592a1403f9a29dd9ba724dd6440c29f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 
00:02:50.079146 kubelet[2999]: E1030 00:02:50.078578 2999 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a0a348b420c20579262fcff53943785592a1403f9a29dd9ba724dd6440c29f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7pkzg" Oct 30 00:02:50.079146 kubelet[2999]: E1030 00:02:50.078592 2999 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a0a348b420c20579262fcff53943785592a1403f9a29dd9ba724dd6440c29f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7pkzg" Oct 30 00:02:50.079546 kubelet[2999]: E1030 00:02:50.078628 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7pkzg_calico-system(b56fb7b2-ab30-4c17-b3d4-f41ec039c361)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7pkzg_calico-system(b56fb7b2-ab30-4c17-b3d4-f41ec039c361)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a0a348b420c20579262fcff53943785592a1403f9a29dd9ba724dd6440c29f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7pkzg" podUID="b56fb7b2-ab30-4c17-b3d4-f41ec039c361" Oct 30 00:02:53.503463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1912416155.mount: Deactivated successfully. 
Oct 30 00:02:53.683126 containerd[1684]: time="2025-10-30T00:02:53.671081369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:53.692235 containerd[1684]: time="2025-10-30T00:02:53.691537772Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:53.692235 containerd[1684]: time="2025-10-30T00:02:53.691807912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 30 00:02:53.693259 containerd[1684]: time="2025-10-30T00:02:53.693235237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:53.694152 containerd[1684]: time="2025-10-30T00:02:53.694124349Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.585546248s" Oct 30 00:02:53.694152 containerd[1684]: time="2025-10-30T00:02:53.694151377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 30 00:02:53.720516 containerd[1684]: time="2025-10-30T00:02:53.720483458Z" level=info msg="CreateContainer within sandbox \"a6042e06e61cd7d678f09c13e0382eec58ebeccca83242bacf47707364f66888\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 30 00:02:53.774160 containerd[1684]: time="2025-10-30T00:02:53.773056013Z" level=info msg="Container 
b7a1c6e305feea3adeac73b0a06b4995b4ff93915ae6a0bebf7453203bde1f48: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:02:53.775111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2329606960.mount: Deactivated successfully. Oct 30 00:02:53.805558 containerd[1684]: time="2025-10-30T00:02:53.805487187Z" level=info msg="CreateContainer within sandbox \"a6042e06e61cd7d678f09c13e0382eec58ebeccca83242bacf47707364f66888\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b7a1c6e305feea3adeac73b0a06b4995b4ff93915ae6a0bebf7453203bde1f48\"" Oct 30 00:02:53.805852 containerd[1684]: time="2025-10-30T00:02:53.805835448Z" level=info msg="StartContainer for \"b7a1c6e305feea3adeac73b0a06b4995b4ff93915ae6a0bebf7453203bde1f48\"" Oct 30 00:02:53.816020 containerd[1684]: time="2025-10-30T00:02:53.815922294Z" level=info msg="connecting to shim b7a1c6e305feea3adeac73b0a06b4995b4ff93915ae6a0bebf7453203bde1f48" address="unix:///run/containerd/s/df057aa2d296eff8b2db22a36750326695687e676f549ff51d13d90d1a05d71d" protocol=ttrpc version=3 Oct 30 00:02:53.913192 systemd[1]: Started cri-containerd-b7a1c6e305feea3adeac73b0a06b4995b4ff93915ae6a0bebf7453203bde1f48.scope - libcontainer container b7a1c6e305feea3adeac73b0a06b4995b4ff93915ae6a0bebf7453203bde1f48. Oct 30 00:02:53.970248 containerd[1684]: time="2025-10-30T00:02:53.970210096Z" level=info msg="StartContainer for \"b7a1c6e305feea3adeac73b0a06b4995b4ff93915ae6a0bebf7453203bde1f48\" returns successfully" Oct 30 00:02:54.242722 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 30 00:02:54.253841 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 30 00:02:54.385494 containerd[1684]: time="2025-10-30T00:02:54.385365939Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7a1c6e305feea3adeac73b0a06b4995b4ff93915ae6a0bebf7453203bde1f48\" id:\"c77e862e79216e350217163e03484e11e2256ff79f0659ae01248c40aff84031\" pid:4051 exit_status:1 exited_at:{seconds:1761782574 nanos:384683779}" Oct 30 00:02:54.535958 kubelet[2999]: I1030 00:02:54.533409 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rhf5k" podStartSLOduration=2.183405794 podStartE2EDuration="17.533394852s" podCreationTimestamp="2025-10-30 00:02:37 +0000 UTC" firstStartedPulling="2025-10-30 00:02:38.34474057 +0000 UTC m=+18.494385940" lastFinishedPulling="2025-10-30 00:02:53.694729629 +0000 UTC m=+33.844374998" observedRunningTime="2025-10-30 00:02:54.22838826 +0000 UTC m=+34.378033632" watchObservedRunningTime="2025-10-30 00:02:54.533394852 +0000 UTC m=+34.683040231" Oct 30 00:02:54.665499 kubelet[2999]: I1030 00:02:54.665470 2999 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2b9de57-0f59-4e5f-9352-7a17cefee0f0-whisker-backend-key-pair\") pod \"f2b9de57-0f59-4e5f-9352-7a17cefee0f0\" (UID: \"f2b9de57-0f59-4e5f-9352-7a17cefee0f0\") " Oct 30 00:02:54.665596 kubelet[2999]: I1030 00:02:54.665511 2999 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2b9de57-0f59-4e5f-9352-7a17cefee0f0-whisker-ca-bundle\") pod \"f2b9de57-0f59-4e5f-9352-7a17cefee0f0\" (UID: \"f2b9de57-0f59-4e5f-9352-7a17cefee0f0\") " Oct 30 00:02:54.665596 kubelet[2999]: I1030 00:02:54.665524 2999 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlvhf\" (UniqueName: \"kubernetes.io/projected/f2b9de57-0f59-4e5f-9352-7a17cefee0f0-kube-api-access-xlvhf\") pod 
\"f2b9de57-0f59-4e5f-9352-7a17cefee0f0\" (UID: \"f2b9de57-0f59-4e5f-9352-7a17cefee0f0\") " Oct 30 00:02:54.682233 kubelet[2999]: I1030 00:02:54.680344 2999 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b9de57-0f59-4e5f-9352-7a17cefee0f0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f2b9de57-0f59-4e5f-9352-7a17cefee0f0" (UID: "f2b9de57-0f59-4e5f-9352-7a17cefee0f0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 30 00:02:54.687402 systemd[1]: var-lib-kubelet-pods-f2b9de57\x2d0f59\x2d4e5f\x2d9352\x2d7a17cefee0f0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxlvhf.mount: Deactivated successfully. Oct 30 00:02:54.689725 kubelet[2999]: I1030 00:02:54.689546 2999 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b9de57-0f59-4e5f-9352-7a17cefee0f0-kube-api-access-xlvhf" (OuterVolumeSpecName: "kube-api-access-xlvhf") pod "f2b9de57-0f59-4e5f-9352-7a17cefee0f0" (UID: "f2b9de57-0f59-4e5f-9352-7a17cefee0f0"). InnerVolumeSpecName "kube-api-access-xlvhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 30 00:02:54.692840 systemd[1]: var-lib-kubelet-pods-f2b9de57\x2d0f59\x2d4e5f\x2d9352\x2d7a17cefee0f0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 30 00:02:54.694255 kubelet[2999]: I1030 00:02:54.694231 2999 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b9de57-0f59-4e5f-9352-7a17cefee0f0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f2b9de57-0f59-4e5f-9352-7a17cefee0f0" (UID: "f2b9de57-0f59-4e5f-9352-7a17cefee0f0"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 30 00:02:54.765950 kubelet[2999]: I1030 00:02:54.765885 2999 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2b9de57-0f59-4e5f-9352-7a17cefee0f0-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 30 00:02:54.765950 kubelet[2999]: I1030 00:02:54.765910 2999 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2b9de57-0f59-4e5f-9352-7a17cefee0f0-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 30 00:02:54.765950 kubelet[2999]: I1030 00:02:54.765932 2999 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xlvhf\" (UniqueName: \"kubernetes.io/projected/f2b9de57-0f59-4e5f-9352-7a17cefee0f0-kube-api-access-xlvhf\") on node \"localhost\" DevicePath \"\"" Oct 30 00:02:55.185810 systemd[1]: Removed slice kubepods-besteffort-podf2b9de57_0f59_4e5f_9352_7a17cefee0f0.slice - libcontainer container kubepods-besteffort-podf2b9de57_0f59_4e5f_9352_7a17cefee0f0.slice. Oct 30 00:02:55.267941 containerd[1684]: time="2025-10-30T00:02:55.267913663Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7a1c6e305feea3adeac73b0a06b4995b4ff93915ae6a0bebf7453203bde1f48\" id:\"ada5b6b4fa046b90a1053f7fb7538c190220ebe9176d72d2af284f72d89d785b\" pid:4094 exit_status:1 exited_at:{seconds:1761782575 nanos:267704292}" Oct 30 00:02:55.285554 systemd[1]: Created slice kubepods-besteffort-pod233c3511_5659_4261_b377_890b1ba99d60.slice - libcontainer container kubepods-besteffort-pod233c3511_5659_4261_b377_890b1ba99d60.slice. 
Oct 30 00:02:55.369534 kubelet[2999]: I1030 00:02:55.369473 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svxvj\" (UniqueName: \"kubernetes.io/projected/233c3511-5659-4261-b377-890b1ba99d60-kube-api-access-svxvj\") pod \"whisker-5d498cdb87-vd49x\" (UID: \"233c3511-5659-4261-b377-890b1ba99d60\") " pod="calico-system/whisker-5d498cdb87-vd49x" Oct 30 00:02:55.369753 kubelet[2999]: I1030 00:02:55.369511 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/233c3511-5659-4261-b377-890b1ba99d60-whisker-backend-key-pair\") pod \"whisker-5d498cdb87-vd49x\" (UID: \"233c3511-5659-4261-b377-890b1ba99d60\") " pod="calico-system/whisker-5d498cdb87-vd49x" Oct 30 00:02:55.369753 kubelet[2999]: I1030 00:02:55.369719 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/233c3511-5659-4261-b377-890b1ba99d60-whisker-ca-bundle\") pod \"whisker-5d498cdb87-vd49x\" (UID: \"233c3511-5659-4261-b377-890b1ba99d60\") " pod="calico-system/whisker-5d498cdb87-vd49x" Oct 30 00:02:55.590204 containerd[1684]: time="2025-10-30T00:02:55.590125889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d498cdb87-vd49x,Uid:233c3511-5659-4261-b377-890b1ba99d60,Namespace:calico-system,Attempt:0,}" Oct 30 00:02:56.041923 kubelet[2999]: I1030 00:02:56.041838 2999 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b9de57-0f59-4e5f-9352-7a17cefee0f0" path="/var/lib/kubelet/pods/f2b9de57-0f59-4e5f-9352-7a17cefee0f0/volumes" Oct 30 00:02:56.216008 systemd-networkd[1584]: cali16bc2231b33: Link UP Oct 30 00:02:56.216200 systemd-networkd[1584]: cali16bc2231b33: Gained carrier Oct 30 00:02:56.229289 containerd[1684]: 2025-10-30 00:02:55.616 [INFO][4111] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist Oct 30 00:02:56.229289 containerd[1684]: 2025-10-30 00:02:55.652 [INFO][4111] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5d498cdb87--vd49x-eth0 whisker-5d498cdb87- calico-system 233c3511-5659-4261-b377-890b1ba99d60 909 0 2025-10-30 00:02:55 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5d498cdb87 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5d498cdb87-vd49x eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali16bc2231b33 [] [] }} ContainerID="c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635" Namespace="calico-system" Pod="whisker-5d498cdb87-vd49x" WorkloadEndpoint="localhost-k8s-whisker--5d498cdb87--vd49x-" Oct 30 00:02:56.229289 containerd[1684]: 2025-10-30 00:02:55.653 [INFO][4111] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635" Namespace="calico-system" Pod="whisker-5d498cdb87-vd49x" WorkloadEndpoint="localhost-k8s-whisker--5d498cdb87--vd49x-eth0" Oct 30 00:02:56.229289 containerd[1684]: 2025-10-30 00:02:56.101 [INFO][4122] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635" HandleID="k8s-pod-network.c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635" Workload="localhost-k8s-whisker--5d498cdb87--vd49x-eth0" Oct 30 00:02:56.229584 containerd[1684]: 2025-10-30 00:02:56.107 [INFO][4122] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635" HandleID="k8s-pod-network.c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635" Workload="localhost-k8s-whisker--5d498cdb87--vd49x-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024d090), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5d498cdb87-vd49x", "timestamp":"2025-10-30 00:02:56.101417116 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:02:56.229584 containerd[1684]: 2025-10-30 00:02:56.107 [INFO][4122] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:02:56.229584 containerd[1684]: 2025-10-30 00:02:56.119 [INFO][4122] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:02:56.229584 containerd[1684]: 2025-10-30 00:02:56.124 [INFO][4122] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:02:56.229584 containerd[1684]: 2025-10-30 00:02:56.140 [INFO][4122] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635" host="localhost" Oct 30 00:02:56.229584 containerd[1684]: 2025-10-30 00:02:56.153 [INFO][4122] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:02:56.229584 containerd[1684]: 2025-10-30 00:02:56.157 [INFO][4122] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:02:56.229584 containerd[1684]: 2025-10-30 00:02:56.159 [INFO][4122] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:02:56.229584 containerd[1684]: 2025-10-30 00:02:56.160 [INFO][4122] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:02:56.229584 containerd[1684]: 2025-10-30 00:02:56.160 [INFO][4122] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635" host="localhost" Oct 30 00:02:56.230385 containerd[1684]: 2025-10-30 00:02:56.162 [INFO][4122] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635 Oct 30 00:02:56.230385 containerd[1684]: 2025-10-30 00:02:56.166 [INFO][4122] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635" host="localhost" Oct 30 00:02:56.230385 containerd[1684]: 2025-10-30 00:02:56.171 [INFO][4122] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635" host="localhost" Oct 30 00:02:56.230385 containerd[1684]: 2025-10-30 00:02:56.171 [INFO][4122] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635" host="localhost" Oct 30 00:02:56.230385 containerd[1684]: 2025-10-30 00:02:56.171 [INFO][4122] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:02:56.230385 containerd[1684]: 2025-10-30 00:02:56.171 [INFO][4122] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635" HandleID="k8s-pod-network.c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635" Workload="localhost-k8s-whisker--5d498cdb87--vd49x-eth0" Oct 30 00:02:56.230906 containerd[1684]: 2025-10-30 00:02:56.173 [INFO][4111] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635" Namespace="calico-system" Pod="whisker-5d498cdb87-vd49x" WorkloadEndpoint="localhost-k8s-whisker--5d498cdb87--vd49x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5d498cdb87--vd49x-eth0", GenerateName:"whisker-5d498cdb87-", Namespace:"calico-system", SelfLink:"", UID:"233c3511-5659-4261-b377-890b1ba99d60", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 2, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d498cdb87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5d498cdb87-vd49x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali16bc2231b33", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:02:56.230906 containerd[1684]: 2025-10-30 00:02:56.173 [INFO][4111] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635" Namespace="calico-system" Pod="whisker-5d498cdb87-vd49x" WorkloadEndpoint="localhost-k8s-whisker--5d498cdb87--vd49x-eth0" Oct 30 00:02:56.231152 containerd[1684]: 2025-10-30 00:02:56.173 [INFO][4111] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16bc2231b33 ContainerID="c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635" Namespace="calico-system" Pod="whisker-5d498cdb87-vd49x" WorkloadEndpoint="localhost-k8s-whisker--5d498cdb87--vd49x-eth0" Oct 30 00:02:56.231152 containerd[1684]: 2025-10-30 00:02:56.211 [INFO][4111] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635" Namespace="calico-system" Pod="whisker-5d498cdb87-vd49x" WorkloadEndpoint="localhost-k8s-whisker--5d498cdb87--vd49x-eth0" Oct 30 00:02:56.231428 containerd[1684]: 2025-10-30 00:02:56.213 [INFO][4111] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635" Namespace="calico-system" Pod="whisker-5d498cdb87-vd49x" WorkloadEndpoint="localhost-k8s-whisker--5d498cdb87--vd49x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5d498cdb87--vd49x-eth0", GenerateName:"whisker-5d498cdb87-", Namespace:"calico-system", SelfLink:"", UID:"233c3511-5659-4261-b377-890b1ba99d60", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 2, 55, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d498cdb87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635", Pod:"whisker-5d498cdb87-vd49x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali16bc2231b33", MAC:"86:02:8a:9d:0a:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:02:56.232011 containerd[1684]: 2025-10-30 00:02:56.225 [INFO][4111] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635" Namespace="calico-system" Pod="whisker-5d498cdb87-vd49x" WorkloadEndpoint="localhost-k8s-whisker--5d498cdb87--vd49x-eth0" Oct 30 00:02:56.295647 systemd-networkd[1584]: vxlan.calico: Link UP Oct 30 00:02:56.295652 systemd-networkd[1584]: vxlan.calico: Gained carrier Oct 30 00:02:56.361662 containerd[1684]: time="2025-10-30T00:02:56.361068191Z" level=info msg="connecting to shim c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635" address="unix:///run/containerd/s/0733cf0d8fdf4dfe323948616c3fe343ba0880ff8204f48f35a353fc83f215fc" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:02:56.398257 systemd[1]: Started cri-containerd-c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635.scope - libcontainer container 
c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635. Oct 30 00:02:56.425396 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:02:56.492262 containerd[1684]: time="2025-10-30T00:02:56.492229069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d498cdb87-vd49x,Uid:233c3511-5659-4261-b377-890b1ba99d60,Namespace:calico-system,Attempt:0,} returns sandbox id \"c388212fa766f864f36db4fb11ebdcc95c952d4f249171ab838afe5e7a77d635\"" Oct 30 00:02:56.494652 containerd[1684]: time="2025-10-30T00:02:56.494624174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:02:56.867829 containerd[1684]: time="2025-10-30T00:02:56.867705197Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:02:56.876616 containerd[1684]: time="2025-10-30T00:02:56.876516067Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:02:56.876616 containerd[1684]: time="2025-10-30T00:02:56.876589933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:02:56.877264 kubelet[2999]: E1030 00:02:56.877198 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:02:56.877264 kubelet[2999]: E1030 00:02:56.877232 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:02:56.877354 kubelet[2999]: E1030 00:02:56.877300 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5d498cdb87-vd49x_calico-system(233c3511-5659-4261-b377-890b1ba99d60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:02:56.878667 containerd[1684]: time="2025-10-30T00:02:56.878647884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 00:02:57.273138 containerd[1684]: time="2025-10-30T00:02:57.273084142Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:02:57.273458 containerd[1684]: time="2025-10-30T00:02:57.273430032Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:02:57.273553 containerd[1684]: time="2025-10-30T00:02:57.273490399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:02:57.273675 kubelet[2999]: E1030 00:02:57.273623 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:02:57.274074 kubelet[2999]: E1030 00:02:57.273683 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:02:57.274074 kubelet[2999]: E1030 00:02:57.273755 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5d498cdb87-vd49x_calico-system(233c3511-5659-4261-b377-890b1ba99d60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:02:57.274074 kubelet[2999]: E1030 00:02:57.273794 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d498cdb87-vd49x" podUID="233c3511-5659-4261-b377-890b1ba99d60" Oct 30 00:02:57.396232 systemd-networkd[1584]: 
cali16bc2231b33: Gained IPv6LL Oct 30 00:02:57.460246 systemd-networkd[1584]: vxlan.calico: Gained IPv6LL Oct 30 00:02:58.192179 kubelet[2999]: E1030 00:02:58.192146 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d498cdb87-vd49x" podUID="233c3511-5659-4261-b377-890b1ba99d60" Oct 30 00:03:00.035462 containerd[1684]: time="2025-10-30T00:03:00.035215641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d5c95967b-s9656,Uid:dc87ca77-2c29-4c4e-beb8-2b81cdefd490,Namespace:calico-system,Attempt:0,}" Oct 30 00:03:00.035955 containerd[1684]: time="2025-10-30T00:03:00.035942911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nf8m8,Uid:070e753e-6ee6-4538-bab1-e3c05026a56a,Namespace:kube-system,Attempt:0,}" Oct 30 00:03:00.135276 systemd-networkd[1584]: calia91e5a02bd8: Link UP Oct 30 00:03:00.136096 systemd-networkd[1584]: calia91e5a02bd8: Gained carrier Oct 30 00:03:00.156126 containerd[1684]: 2025-10-30 00:03:00.081 [INFO][4394] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-calico--kube--controllers--7d5c95967b--s9656-eth0 calico-kube-controllers-7d5c95967b- calico-system dc87ca77-2c29-4c4e-beb8-2b81cdefd490 843 0 2025-10-30 00:02:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d5c95967b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7d5c95967b-s9656 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia91e5a02bd8 [] [] }} ContainerID="8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52" Namespace="calico-system" Pod="calico-kube-controllers-7d5c95967b-s9656" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d5c95967b--s9656-" Oct 30 00:03:00.156126 containerd[1684]: 2025-10-30 00:03:00.081 [INFO][4394] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52" Namespace="calico-system" Pod="calico-kube-controllers-7d5c95967b-s9656" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d5c95967b--s9656-eth0" Oct 30 00:03:00.156126 containerd[1684]: 2025-10-30 00:03:00.102 [INFO][4410] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52" HandleID="k8s-pod-network.8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52" Workload="localhost-k8s-calico--kube--controllers--7d5c95967b--s9656-eth0" Oct 30 00:03:00.156290 containerd[1684]: 2025-10-30 00:03:00.102 [INFO][4410] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52" HandleID="k8s-pod-network.8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52" Workload="localhost-k8s-calico--kube--controllers--7d5c95967b--s9656-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5680), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7d5c95967b-s9656", "timestamp":"2025-10-30 00:03:00.102124128 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:03:00.156290 containerd[1684]: 2025-10-30 00:03:00.102 [INFO][4410] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:03:00.156290 containerd[1684]: 2025-10-30 00:03:00.102 [INFO][4410] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:03:00.156290 containerd[1684]: 2025-10-30 00:03:00.102 [INFO][4410] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:03:00.156290 containerd[1684]: 2025-10-30 00:03:00.107 [INFO][4410] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52" host="localhost" Oct 30 00:03:00.156290 containerd[1684]: 2025-10-30 00:03:00.113 [INFO][4410] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:03:00.156290 containerd[1684]: 2025-10-30 00:03:00.116 [INFO][4410] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:03:00.156290 containerd[1684]: 2025-10-30 00:03:00.117 [INFO][4410] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:03:00.156290 containerd[1684]: 2025-10-30 00:03:00.120 [INFO][4410] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:03:00.156290 containerd[1684]: 2025-10-30 00:03:00.120 [INFO][4410] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52" host="localhost" Oct 30 00:03:00.157447 containerd[1684]: 2025-10-30 00:03:00.122 [INFO][4410] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52 Oct 30 00:03:00.157447 containerd[1684]: 2025-10-30 00:03:00.124 [INFO][4410] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52" host="localhost" Oct 30 00:03:00.157447 containerd[1684]: 2025-10-30 00:03:00.128 [INFO][4410] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52" host="localhost" Oct 30 00:03:00.157447 containerd[1684]: 2025-10-30 00:03:00.128 [INFO][4410] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52" host="localhost" Oct 30 00:03:00.157447 containerd[1684]: 2025-10-30 00:03:00.128 [INFO][4410] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:03:00.157447 containerd[1684]: 2025-10-30 00:03:00.128 [INFO][4410] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52" HandleID="k8s-pod-network.8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52" Workload="localhost-k8s-calico--kube--controllers--7d5c95967b--s9656-eth0" Oct 30 00:03:00.158250 containerd[1684]: 2025-10-30 00:03:00.130 [INFO][4394] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52" Namespace="calico-system" Pod="calico-kube-controllers-7d5c95967b-s9656" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d5c95967b--s9656-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d5c95967b--s9656-eth0", GenerateName:"calico-kube-controllers-7d5c95967b-", Namespace:"calico-system", SelfLink:"", UID:"dc87ca77-2c29-4c4e-beb8-2b81cdefd490", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 2, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d5c95967b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7d5c95967b-s9656", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia91e5a02bd8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:03:00.158301 containerd[1684]: 2025-10-30 00:03:00.130 [INFO][4394] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52" Namespace="calico-system" Pod="calico-kube-controllers-7d5c95967b-s9656" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d5c95967b--s9656-eth0" Oct 30 00:03:00.158301 containerd[1684]: 2025-10-30 00:03:00.130 [INFO][4394] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia91e5a02bd8 ContainerID="8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52" Namespace="calico-system" Pod="calico-kube-controllers-7d5c95967b-s9656" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d5c95967b--s9656-eth0" Oct 30 00:03:00.158301 containerd[1684]: 2025-10-30 00:03:00.136 [INFO][4394] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52" Namespace="calico-system" Pod="calico-kube-controllers-7d5c95967b-s9656" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d5c95967b--s9656-eth0" Oct 30 00:03:00.158350 containerd[1684]: 2025-10-30 00:03:00.138 [INFO][4394] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52" Namespace="calico-system" Pod="calico-kube-controllers-7d5c95967b-s9656" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d5c95967b--s9656-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d5c95967b--s9656-eth0", GenerateName:"calico-kube-controllers-7d5c95967b-", Namespace:"calico-system", SelfLink:"", UID:"dc87ca77-2c29-4c4e-beb8-2b81cdefd490", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 2, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d5c95967b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52", Pod:"calico-kube-controllers-7d5c95967b-s9656", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia91e5a02bd8", MAC:"2e:17:4c:75:1b:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:03:00.158391 containerd[1684]: 2025-10-30 00:03:00.145 [INFO][4394] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52" Namespace="calico-system" Pod="calico-kube-controllers-7d5c95967b-s9656" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d5c95967b--s9656-eth0" Oct 30 00:03:00.182810 containerd[1684]: time="2025-10-30T00:03:00.182775004Z" level=info msg="connecting to shim 
8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52" address="unix:///run/containerd/s/c8aca93f89c5fd4f4a15ebb112037829897ccd89710a7f305cfdcb09abe5b4ac" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:03:00.202440 systemd[1]: Started cri-containerd-8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52.scope - libcontainer container 8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52. Oct 30 00:03:00.213052 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:03:00.230203 systemd-networkd[1584]: cali2ed136c31a5: Link UP Oct 30 00:03:00.230780 systemd-networkd[1584]: cali2ed136c31a5: Gained carrier Oct 30 00:03:00.244752 containerd[1684]: 2025-10-30 00:03:00.081 [INFO][4386] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--nf8m8-eth0 coredns-66bc5c9577- kube-system 070e753e-6ee6-4538-bab1-e3c05026a56a 832 0 2025-10-30 00:02:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-nf8m8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2ed136c31a5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f" Namespace="kube-system" Pod="coredns-66bc5c9577-nf8m8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nf8m8-" Oct 30 00:03:00.244752 containerd[1684]: 2025-10-30 00:03:00.081 [INFO][4386] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f" Namespace="kube-system" Pod="coredns-66bc5c9577-nf8m8" 
WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nf8m8-eth0" Oct 30 00:03:00.244752 containerd[1684]: 2025-10-30 00:03:00.121 [INFO][4413] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f" HandleID="k8s-pod-network.35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f" Workload="localhost-k8s-coredns--66bc5c9577--nf8m8-eth0" Oct 30 00:03:00.244908 containerd[1684]: 2025-10-30 00:03:00.121 [INFO][4413] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f" HandleID="k8s-pod-network.35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f" Workload="localhost-k8s-coredns--66bc5c9577--nf8m8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000239600), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-nf8m8", "timestamp":"2025-10-30 00:03:00.121756253 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:03:00.244908 containerd[1684]: 2025-10-30 00:03:00.121 [INFO][4413] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:03:00.244908 containerd[1684]: 2025-10-30 00:03:00.128 [INFO][4413] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:03:00.244908 containerd[1684]: 2025-10-30 00:03:00.128 [INFO][4413] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:03:00.244908 containerd[1684]: 2025-10-30 00:03:00.208 [INFO][4413] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f" host="localhost" Oct 30 00:03:00.244908 containerd[1684]: 2025-10-30 00:03:00.214 [INFO][4413] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:03:00.244908 containerd[1684]: 2025-10-30 00:03:00.217 [INFO][4413] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:03:00.244908 containerd[1684]: 2025-10-30 00:03:00.218 [INFO][4413] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:03:00.244908 containerd[1684]: 2025-10-30 00:03:00.219 [INFO][4413] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:03:00.244908 containerd[1684]: 2025-10-30 00:03:00.219 [INFO][4413] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f" host="localhost" Oct 30 00:03:00.245083 containerd[1684]: 2025-10-30 00:03:00.220 [INFO][4413] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f Oct 30 00:03:00.245083 containerd[1684]: 2025-10-30 00:03:00.223 [INFO][4413] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f" host="localhost" Oct 30 00:03:00.245083 containerd[1684]: 2025-10-30 00:03:00.226 [INFO][4413] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f" host="localhost" Oct 30 00:03:00.245083 containerd[1684]: 2025-10-30 00:03:00.226 [INFO][4413] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f" host="localhost" Oct 30 00:03:00.245083 containerd[1684]: 2025-10-30 00:03:00.226 [INFO][4413] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:03:00.245083 containerd[1684]: 2025-10-30 00:03:00.226 [INFO][4413] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f" HandleID="k8s-pod-network.35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f" Workload="localhost-k8s-coredns--66bc5c9577--nf8m8-eth0" Oct 30 00:03:00.245206 containerd[1684]: 2025-10-30 00:03:00.228 [INFO][4386] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f" Namespace="kube-system" Pod="coredns-66bc5c9577-nf8m8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nf8m8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--nf8m8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"070e753e-6ee6-4538-bab1-e3c05026a56a", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 2, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-nf8m8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ed136c31a5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:03:00.245206 containerd[1684]: 2025-10-30 00:03:00.228 [INFO][4386] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f" Namespace="kube-system" Pod="coredns-66bc5c9577-nf8m8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nf8m8-eth0" Oct 30 00:03:00.245206 containerd[1684]: 2025-10-30 00:03:00.228 [INFO][4386] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ed136c31a5 ContainerID="35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f" Namespace="kube-system" Pod="coredns-66bc5c9577-nf8m8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nf8m8-eth0" Oct 30 
00:03:00.245206 containerd[1684]: 2025-10-30 00:03:00.231 [INFO][4386] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f" Namespace="kube-system" Pod="coredns-66bc5c9577-nf8m8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nf8m8-eth0" Oct 30 00:03:00.245206 containerd[1684]: 2025-10-30 00:03:00.233 [INFO][4386] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f" Namespace="kube-system" Pod="coredns-66bc5c9577-nf8m8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nf8m8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--nf8m8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"070e753e-6ee6-4538-bab1-e3c05026a56a", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 2, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f", Pod:"coredns-66bc5c9577-nf8m8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ed136c31a5", 
MAC:"7e:7b:95:26:91:9d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:03:00.245206 containerd[1684]: 2025-10-30 00:03:00.243 [INFO][4386] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f" Namespace="kube-system" Pod="coredns-66bc5c9577-nf8m8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nf8m8-eth0" Oct 30 00:03:00.261337 containerd[1684]: time="2025-10-30T00:03:00.261219439Z" level=info msg="connecting to shim 35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f" address="unix:///run/containerd/s/bce921f2352540c461d1918babc4db0566830e8f884a211655683af58cc602e9" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:03:00.274478 containerd[1684]: time="2025-10-30T00:03:00.274449803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d5c95967b-s9656,Uid:dc87ca77-2c29-4c4e-beb8-2b81cdefd490,Namespace:calico-system,Attempt:0,} returns sandbox id \"8154f576c8f9dd89c3c06d90353fe627b137e276f5c426fa89dfdf55c541dd52\"" Oct 30 00:03:00.276300 containerd[1684]: time="2025-10-30T00:03:00.276270428Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:03:00.289236 systemd[1]: Started cri-containerd-35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f.scope - libcontainer container 35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f. Oct 30 00:03:00.299003 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:03:00.340310 containerd[1684]: time="2025-10-30T00:03:00.340282581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nf8m8,Uid:070e753e-6ee6-4538-bab1-e3c05026a56a,Namespace:kube-system,Attempt:0,} returns sandbox id \"35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f\"" Oct 30 00:03:00.355570 containerd[1684]: time="2025-10-30T00:03:00.355543548Z" level=info msg="CreateContainer within sandbox \"35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 00:03:00.365587 containerd[1684]: time="2025-10-30T00:03:00.365550415Z" level=info msg="Container c2fbe6c1451dbed30eb23281fc73f76a73a075cf2e68ec1e3807f2bdfa75feca: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:03:00.369280 containerd[1684]: time="2025-10-30T00:03:00.369248161Z" level=info msg="CreateContainer within sandbox \"35442db6083a64449e3c6a8d586ca0abeb33b28ae19f7296cd35507fd8c41f0f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c2fbe6c1451dbed30eb23281fc73f76a73a075cf2e68ec1e3807f2bdfa75feca\"" Oct 30 00:03:00.370290 containerd[1684]: time="2025-10-30T00:03:00.370270986Z" level=info msg="StartContainer for \"c2fbe6c1451dbed30eb23281fc73f76a73a075cf2e68ec1e3807f2bdfa75feca\"" Oct 30 00:03:00.371002 containerd[1684]: time="2025-10-30T00:03:00.370980527Z" level=info msg="connecting to shim c2fbe6c1451dbed30eb23281fc73f76a73a075cf2e68ec1e3807f2bdfa75feca" 
address="unix:///run/containerd/s/bce921f2352540c461d1918babc4db0566830e8f884a211655683af58cc602e9" protocol=ttrpc version=3 Oct 30 00:03:00.387251 systemd[1]: Started cri-containerd-c2fbe6c1451dbed30eb23281fc73f76a73a075cf2e68ec1e3807f2bdfa75feca.scope - libcontainer container c2fbe6c1451dbed30eb23281fc73f76a73a075cf2e68ec1e3807f2bdfa75feca. Oct 30 00:03:00.417863 containerd[1684]: time="2025-10-30T00:03:00.417836664Z" level=info msg="StartContainer for \"c2fbe6c1451dbed30eb23281fc73f76a73a075cf2e68ec1e3807f2bdfa75feca\" returns successfully" Oct 30 00:03:00.795092 containerd[1684]: time="2025-10-30T00:03:00.794956129Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:00.795597 containerd[1684]: time="2025-10-30T00:03:00.795527172Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:03:00.795597 containerd[1684]: time="2025-10-30T00:03:00.795581803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:03:00.795785 kubelet[2999]: E1030 00:03:00.795746 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:03:00.796241 kubelet[2999]: E1030 00:03:00.795787 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:03:00.796241 kubelet[2999]: E1030 00:03:00.795844 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7d5c95967b-s9656_calico-system(dc87ca77-2c29-4c4e-beb8-2b81cdefd490): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:00.796241 kubelet[2999]: E1030 00:03:00.795879 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c95967b-s9656" podUID="dc87ca77-2c29-4c4e-beb8-2b81cdefd490" Oct 30 00:03:01.044298 containerd[1684]: time="2025-10-30T00:03:01.044273866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b74d8c9c5-74m4p,Uid:3e83ebe5-6a04-4887-9815-f7c972a7870a,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:03:01.045047 containerd[1684]: time="2025-10-30T00:03:01.044354301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7pkzg,Uid:b56fb7b2-ab30-4c17-b3d4-f41ec039c361,Namespace:calico-system,Attempt:0,}" Oct 30 00:03:01.220096 systemd-networkd[1584]: calib2a29b4de76: Link UP Oct 30 00:03:01.220758 
systemd-networkd[1584]: calib2a29b4de76: Gained carrier Oct 30 00:03:01.243159 containerd[1684]: 2025-10-30 00:03:01.086 [INFO][4570] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7b74d8c9c5--74m4p-eth0 calico-apiserver-7b74d8c9c5- calico-apiserver 3e83ebe5-6a04-4887-9815-f7c972a7870a 842 0 2025-10-30 00:02:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b74d8c9c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7b74d8c9c5-74m4p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib2a29b4de76 [] [] }} ContainerID="3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f" Namespace="calico-apiserver" Pod="calico-apiserver-7b74d8c9c5-74m4p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b74d8c9c5--74m4p-" Oct 30 00:03:01.243159 containerd[1684]: 2025-10-30 00:03:01.087 [INFO][4570] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f" Namespace="calico-apiserver" Pod="calico-apiserver-7b74d8c9c5-74m4p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b74d8c9c5--74m4p-eth0" Oct 30 00:03:01.243159 containerd[1684]: 2025-10-30 00:03:01.152 [INFO][4596] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f" HandleID="k8s-pod-network.3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f" Workload="localhost-k8s-calico--apiserver--7b74d8c9c5--74m4p-eth0" Oct 30 00:03:01.243159 containerd[1684]: 2025-10-30 00:03:01.152 [INFO][4596] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f" 
HandleID="k8s-pod-network.3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f" Workload="localhost-k8s-calico--apiserver--7b74d8c9c5--74m4p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad370), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7b74d8c9c5-74m4p", "timestamp":"2025-10-30 00:03:01.152291299 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:03:01.243159 containerd[1684]: 2025-10-30 00:03:01.152 [INFO][4596] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:03:01.243159 containerd[1684]: 2025-10-30 00:03:01.152 [INFO][4596] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:03:01.243159 containerd[1684]: 2025-10-30 00:03:01.152 [INFO][4596] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:03:01.243159 containerd[1684]: 2025-10-30 00:03:01.156 [INFO][4596] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f" host="localhost" Oct 30 00:03:01.243159 containerd[1684]: 2025-10-30 00:03:01.159 [INFO][4596] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:03:01.243159 containerd[1684]: 2025-10-30 00:03:01.161 [INFO][4596] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:03:01.243159 containerd[1684]: 2025-10-30 00:03:01.162 [INFO][4596] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:03:01.243159 containerd[1684]: 2025-10-30 00:03:01.163 [INFO][4596] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:03:01.243159 containerd[1684]: 
2025-10-30 00:03:01.163 [INFO][4596] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f" host="localhost" Oct 30 00:03:01.243159 containerd[1684]: 2025-10-30 00:03:01.164 [INFO][4596] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f Oct 30 00:03:01.243159 containerd[1684]: 2025-10-30 00:03:01.199 [INFO][4596] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f" host="localhost" Oct 30 00:03:01.243159 containerd[1684]: 2025-10-30 00:03:01.215 [INFO][4596] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f" host="localhost" Oct 30 00:03:01.243159 containerd[1684]: 2025-10-30 00:03:01.216 [INFO][4596] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f" host="localhost" Oct 30 00:03:01.243159 containerd[1684]: 2025-10-30 00:03:01.216 [INFO][4596] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:03:01.243159 containerd[1684]: 2025-10-30 00:03:01.216 [INFO][4596] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f" HandleID="k8s-pod-network.3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f" Workload="localhost-k8s-calico--apiserver--7b74d8c9c5--74m4p-eth0" Oct 30 00:03:01.243611 containerd[1684]: 2025-10-30 00:03:01.217 [INFO][4570] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f" Namespace="calico-apiserver" Pod="calico-apiserver-7b74d8c9c5-74m4p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b74d8c9c5--74m4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b74d8c9c5--74m4p-eth0", GenerateName:"calico-apiserver-7b74d8c9c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"3e83ebe5-6a04-4887-9815-f7c972a7870a", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 2, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b74d8c9c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7b74d8c9c5-74m4p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2a29b4de76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:03:01.243611 containerd[1684]: 2025-10-30 00:03:01.217 [INFO][4570] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f" Namespace="calico-apiserver" Pod="calico-apiserver-7b74d8c9c5-74m4p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b74d8c9c5--74m4p-eth0" Oct 30 00:03:01.243611 containerd[1684]: 2025-10-30 00:03:01.218 [INFO][4570] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2a29b4de76 ContainerID="3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f" Namespace="calico-apiserver" Pod="calico-apiserver-7b74d8c9c5-74m4p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b74d8c9c5--74m4p-eth0" Oct 30 00:03:01.243611 containerd[1684]: 2025-10-30 00:03:01.221 [INFO][4570] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f" Namespace="calico-apiserver" Pod="calico-apiserver-7b74d8c9c5-74m4p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b74d8c9c5--74m4p-eth0" Oct 30 00:03:01.243611 containerd[1684]: 2025-10-30 00:03:01.222 [INFO][4570] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f" Namespace="calico-apiserver" Pod="calico-apiserver-7b74d8c9c5-74m4p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b74d8c9c5--74m4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b74d8c9c5--74m4p-eth0", 
GenerateName:"calico-apiserver-7b74d8c9c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"3e83ebe5-6a04-4887-9815-f7c972a7870a", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 2, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b74d8c9c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f", Pod:"calico-apiserver-7b74d8c9c5-74m4p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2a29b4de76", MAC:"a6:65:27:e8:9b:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:03:01.243611 containerd[1684]: 2025-10-30 00:03:01.238 [INFO][4570] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f" Namespace="calico-apiserver" Pod="calico-apiserver-7b74d8c9c5-74m4p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b74d8c9c5--74m4p-eth0" Oct 30 00:03:01.246562 kubelet[2999]: E1030 00:03:01.246098 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c95967b-s9656" podUID="dc87ca77-2c29-4c4e-beb8-2b81cdefd490" Oct 30 00:03:01.272187 containerd[1684]: time="2025-10-30T00:03:01.271525341Z" level=info msg="connecting to shim 3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f" address="unix:///run/containerd/s/a95e11fd96100b6ad1b32acb5082482f160fd9753b70b7d9ada12103a2066661" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:03:01.297290 systemd[1]: Started cri-containerd-3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f.scope - libcontainer container 3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f. Oct 30 00:03:01.304353 kubelet[2999]: I1030 00:03:01.304270 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nf8m8" podStartSLOduration=36.304251361 podStartE2EDuration="36.304251361s" podCreationTimestamp="2025-10-30 00:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:03:01.304010073 +0000 UTC m=+41.453655453" watchObservedRunningTime="2025-10-30 00:03:01.304251361 +0000 UTC m=+41.453896746" Oct 30 00:03:01.316360 systemd-networkd[1584]: calib44ae864a3d: Link UP Oct 30 00:03:01.317414 systemd-networkd[1584]: calib44ae864a3d: Gained carrier Oct 30 00:03:01.331360 containerd[1684]: 2025-10-30 00:03:01.113 [INFO][4583] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7pkzg-eth0 csi-node-driver- calico-system b56fb7b2-ab30-4c17-b3d4-f41ec039c361 
721 0 2025-10-30 00:02:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7pkzg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib44ae864a3d [] [] }} ContainerID="9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955" Namespace="calico-system" Pod="csi-node-driver-7pkzg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pkzg-" Oct 30 00:03:01.331360 containerd[1684]: 2025-10-30 00:03:01.113 [INFO][4583] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955" Namespace="calico-system" Pod="csi-node-driver-7pkzg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pkzg-eth0" Oct 30 00:03:01.331360 containerd[1684]: 2025-10-30 00:03:01.152 [INFO][4602] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955" HandleID="k8s-pod-network.9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955" Workload="localhost-k8s-csi--node--driver--7pkzg-eth0" Oct 30 00:03:01.331360 containerd[1684]: 2025-10-30 00:03:01.152 [INFO][4602] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955" HandleID="k8s-pod-network.9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955" Workload="localhost-k8s-csi--node--driver--7pkzg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7pkzg", "timestamp":"2025-10-30 00:03:01.152523462 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:03:01.331360 containerd[1684]: 2025-10-30 00:03:01.153 [INFO][4602] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:03:01.331360 containerd[1684]: 2025-10-30 00:03:01.216 [INFO][4602] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:03:01.331360 containerd[1684]: 2025-10-30 00:03:01.216 [INFO][4602] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:03:01.331360 containerd[1684]: 2025-10-30 00:03:01.259 [INFO][4602] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955" host="localhost" Oct 30 00:03:01.331360 containerd[1684]: 2025-10-30 00:03:01.266 [INFO][4602] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:03:01.331360 containerd[1684]: 2025-10-30 00:03:01.270 [INFO][4602] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:03:01.331360 containerd[1684]: 2025-10-30 00:03:01.274 [INFO][4602] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:03:01.331360 containerd[1684]: 2025-10-30 00:03:01.277 [INFO][4602] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:03:01.331360 containerd[1684]: 2025-10-30 00:03:01.277 [INFO][4602] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955" host="localhost" Oct 30 00:03:01.331360 containerd[1684]: 2025-10-30 00:03:01.285 [INFO][4602] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955 Oct 30 
00:03:01.331360 containerd[1684]: 2025-10-30 00:03:01.299 [INFO][4602] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955" host="localhost" Oct 30 00:03:01.331360 containerd[1684]: 2025-10-30 00:03:01.307 [INFO][4602] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955" host="localhost" Oct 30 00:03:01.331360 containerd[1684]: 2025-10-30 00:03:01.307 [INFO][4602] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955" host="localhost" Oct 30 00:03:01.331360 containerd[1684]: 2025-10-30 00:03:01.307 [INFO][4602] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:03:01.331360 containerd[1684]: 2025-10-30 00:03:01.307 [INFO][4602] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955" HandleID="k8s-pod-network.9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955" Workload="localhost-k8s-csi--node--driver--7pkzg-eth0" Oct 30 00:03:01.332619 containerd[1684]: 2025-10-30 00:03:01.309 [INFO][4583] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955" Namespace="calico-system" Pod="csi-node-driver-7pkzg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pkzg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7pkzg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b56fb7b2-ab30-4c17-b3d4-f41ec039c361", ResourceVersion:"721", Generation:0, 
CreationTimestamp:time.Date(2025, time.October, 30, 0, 2, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7pkzg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib44ae864a3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:03:01.332619 containerd[1684]: 2025-10-30 00:03:01.310 [INFO][4583] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955" Namespace="calico-system" Pod="csi-node-driver-7pkzg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pkzg-eth0" Oct 30 00:03:01.332619 containerd[1684]: 2025-10-30 00:03:01.311 [INFO][4583] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib44ae864a3d ContainerID="9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955" Namespace="calico-system" Pod="csi-node-driver-7pkzg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pkzg-eth0" Oct 30 00:03:01.332619 containerd[1684]: 2025-10-30 00:03:01.317 [INFO][4583] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955" Namespace="calico-system" Pod="csi-node-driver-7pkzg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pkzg-eth0" Oct 30 00:03:01.332619 containerd[1684]: 2025-10-30 00:03:01.318 [INFO][4583] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955" Namespace="calico-system" Pod="csi-node-driver-7pkzg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pkzg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7pkzg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b56fb7b2-ab30-4c17-b3d4-f41ec039c361", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 2, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955", Pod:"csi-node-driver-7pkzg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib44ae864a3d", 
MAC:"8a:7c:3a:69:73:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:03:01.332619 containerd[1684]: 2025-10-30 00:03:01.328 [INFO][4583] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955" Namespace="calico-system" Pod="csi-node-driver-7pkzg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pkzg-eth0" Oct 30 00:03:01.343707 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:03:01.348903 containerd[1684]: time="2025-10-30T00:03:01.348798130Z" level=info msg="connecting to shim 9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955" address="unix:///run/containerd/s/c6bbd99d763c25c6b74bfee413763b8d75f90ff8efdb96f40b03fcbee50301c2" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:03:01.372290 systemd[1]: Started cri-containerd-9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955.scope - libcontainer container 9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955. 
Oct 30 00:03:01.391004 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:03:01.404496 containerd[1684]: time="2025-10-30T00:03:01.404380187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b74d8c9c5-74m4p,Uid:3e83ebe5-6a04-4887-9815-f7c972a7870a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3a972c306b38cda84340e77b0d555d16b522f077161d397956751c227706532f\"" Oct 30 00:03:01.405377 containerd[1684]: time="2025-10-30T00:03:01.405343633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7pkzg,Uid:b56fb7b2-ab30-4c17-b3d4-f41ec039c361,Namespace:calico-system,Attempt:0,} returns sandbox id \"9edc0cc01d5e43152317895100e31831c39e16eb912145da15dae2a0900a4955\"" Oct 30 00:03:01.405905 containerd[1684]: time="2025-10-30T00:03:01.405882436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:03:01.748233 systemd-networkd[1584]: cali2ed136c31a5: Gained IPv6LL Oct 30 00:03:01.793229 containerd[1684]: time="2025-10-30T00:03:01.793193029Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:01.799975 containerd[1684]: time="2025-10-30T00:03:01.799941285Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:03:01.800055 containerd[1684]: time="2025-10-30T00:03:01.799955245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:03:01.800186 kubelet[2999]: E1030 00:03:01.800156 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:03:01.800820 kubelet[2999]: E1030 00:03:01.800193 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:03:01.800820 kubelet[2999]: E1030 00:03:01.800407 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b74d8c9c5-74m4p_calico-apiserver(3e83ebe5-6a04-4887-9815-f7c972a7870a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:01.800820 kubelet[2999]: E1030 00:03:01.800435 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b74d8c9c5-74m4p" podUID="3e83ebe5-6a04-4887-9815-f7c972a7870a" Oct 30 00:03:01.800941 containerd[1684]: time="2025-10-30T00:03:01.800438095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 00:03:01.940446 systemd-networkd[1584]: calia91e5a02bd8: Gained IPv6LL Oct 30 00:03:02.192768 containerd[1684]: 
time="2025-10-30T00:03:02.192735993Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:02.194687 containerd[1684]: time="2025-10-30T00:03:02.194644123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:03:02.194687 containerd[1684]: time="2025-10-30T00:03:02.194671555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:03:02.194872 kubelet[2999]: E1030 00:03:02.194804 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:03:02.194918 kubelet[2999]: E1030 00:03:02.194838 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:03:02.195064 kubelet[2999]: E1030 00:03:02.195008 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7pkzg_calico-system(b56fb7b2-ab30-4c17-b3d4-f41ec039c361): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:02.198924 containerd[1684]: 
time="2025-10-30T00:03:02.195644957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 00:03:02.252194 kubelet[2999]: E1030 00:03:02.252147 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b74d8c9c5-74m4p" podUID="3e83ebe5-6a04-4887-9815-f7c972a7870a" Oct 30 00:03:02.255028 kubelet[2999]: E1030 00:03:02.254990 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c95967b-s9656" podUID="dc87ca77-2c29-4c4e-beb8-2b81cdefd490" Oct 30 00:03:02.551302 containerd[1684]: time="2025-10-30T00:03:02.551202333Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:02.557590 containerd[1684]: time="2025-10-30T00:03:02.557543595Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:03:02.562919 containerd[1684]: time="2025-10-30T00:03:02.557622526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:03:02.562966 kubelet[2999]: E1030 00:03:02.557735 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:03:02.562966 kubelet[2999]: E1030 00:03:02.557805 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:03:02.562966 kubelet[2999]: E1030 00:03:02.557927 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7pkzg_calico-system(b56fb7b2-ab30-4c17-b3d4-f41ec039c361): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:02.563064 kubelet[2999]: E1030 00:03:02.557960 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pkzg" podUID="b56fb7b2-ab30-4c17-b3d4-f41ec039c361" Oct 30 00:03:02.964285 systemd-networkd[1584]: calib44ae864a3d: Gained IPv6LL Oct 30 00:03:03.046590 containerd[1684]: time="2025-10-30T00:03:03.046557426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pm7rt,Uid:5b6d7e41-a662-49c6-bb05-81f7e9c9a829,Namespace:calico-system,Attempt:0,}" Oct 30 00:03:03.092448 systemd-networkd[1584]: calib2a29b4de76: Gained IPv6LL Oct 30 00:03:03.198562 systemd-networkd[1584]: calie51a7ad58aa: Link UP Oct 30 00:03:03.200566 systemd-networkd[1584]: calie51a7ad58aa: Gained carrier Oct 30 00:03:03.218695 containerd[1684]: 2025-10-30 00:03:03.117 [INFO][4730] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--pm7rt-eth0 goldmane-7c778bb748- calico-system 5b6d7e41-a662-49c6-bb05-81f7e9c9a829 837 0 2025-10-30 00:02:35 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-pm7rt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie51a7ad58aa [] [] }} ContainerID="d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae" Namespace="calico-system" Pod="goldmane-7c778bb748-pm7rt" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pm7rt-" Oct 30 00:03:03.218695 containerd[1684]: 2025-10-30 00:03:03.118 [INFO][4730] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae" Namespace="calico-system" Pod="goldmane-7c778bb748-pm7rt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pm7rt-eth0" Oct 30 00:03:03.218695 containerd[1684]: 2025-10-30 00:03:03.148 [INFO][4741] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae" HandleID="k8s-pod-network.d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae" Workload="localhost-k8s-goldmane--7c778bb748--pm7rt-eth0" Oct 30 00:03:03.218695 containerd[1684]: 2025-10-30 00:03:03.148 [INFO][4741] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae" HandleID="k8s-pod-network.d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae" Workload="localhost-k8s-goldmane--7c778bb748--pm7rt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b73a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-pm7rt", "timestamp":"2025-10-30 00:03:03.148183768 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:03:03.218695 containerd[1684]: 2025-10-30 00:03:03.148 [INFO][4741] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:03:03.218695 containerd[1684]: 2025-10-30 00:03:03.148 [INFO][4741] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:03:03.218695 containerd[1684]: 2025-10-30 00:03:03.148 [INFO][4741] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:03:03.218695 containerd[1684]: 2025-10-30 00:03:03.152 [INFO][4741] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae" host="localhost" Oct 30 00:03:03.218695 containerd[1684]: 2025-10-30 00:03:03.155 [INFO][4741] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:03:03.218695 containerd[1684]: 2025-10-30 00:03:03.158 [INFO][4741] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:03:03.218695 containerd[1684]: 2025-10-30 00:03:03.159 [INFO][4741] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:03:03.218695 containerd[1684]: 2025-10-30 00:03:03.161 [INFO][4741] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:03:03.218695 containerd[1684]: 2025-10-30 00:03:03.161 [INFO][4741] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae" host="localhost" Oct 30 00:03:03.218695 containerd[1684]: 2025-10-30 00:03:03.162 [INFO][4741] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae Oct 30 00:03:03.218695 containerd[1684]: 2025-10-30 00:03:03.167 [INFO][4741] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae" host="localhost" Oct 30 00:03:03.218695 containerd[1684]: 2025-10-30 00:03:03.190 [INFO][4741] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae" host="localhost" Oct 30 00:03:03.218695 containerd[1684]: 2025-10-30 00:03:03.191 [INFO][4741] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae" host="localhost" Oct 30 00:03:03.218695 containerd[1684]: 2025-10-30 00:03:03.191 [INFO][4741] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:03:03.218695 containerd[1684]: 2025-10-30 00:03:03.191 [INFO][4741] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae" HandleID="k8s-pod-network.d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae" Workload="localhost-k8s-goldmane--7c778bb748--pm7rt-eth0" Oct 30 00:03:03.221521 containerd[1684]: 2025-10-30 00:03:03.193 [INFO][4730] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae" Namespace="calico-system" Pod="goldmane-7c778bb748-pm7rt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pm7rt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--pm7rt-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"5b6d7e41-a662-49c6-bb05-81f7e9c9a829", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 2, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-pm7rt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie51a7ad58aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:03:03.221521 containerd[1684]: 2025-10-30 00:03:03.193 [INFO][4730] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae" Namespace="calico-system" Pod="goldmane-7c778bb748-pm7rt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pm7rt-eth0" Oct 30 00:03:03.221521 containerd[1684]: 2025-10-30 00:03:03.193 [INFO][4730] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie51a7ad58aa ContainerID="d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae" Namespace="calico-system" Pod="goldmane-7c778bb748-pm7rt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pm7rt-eth0" Oct 30 00:03:03.221521 containerd[1684]: 2025-10-30 00:03:03.201 [INFO][4730] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae" Namespace="calico-system" Pod="goldmane-7c778bb748-pm7rt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pm7rt-eth0" Oct 30 00:03:03.221521 containerd[1684]: 2025-10-30 00:03:03.202 [INFO][4730] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae" Namespace="calico-system" Pod="goldmane-7c778bb748-pm7rt" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pm7rt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--pm7rt-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"5b6d7e41-a662-49c6-bb05-81f7e9c9a829", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 2, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae", Pod:"goldmane-7c778bb748-pm7rt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie51a7ad58aa", MAC:"de:e4:23:c2:84:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:03:03.221521 containerd[1684]: 2025-10-30 00:03:03.214 [INFO][4730] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae" Namespace="calico-system" Pod="goldmane-7c778bb748-pm7rt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pm7rt-eth0" Oct 30 00:03:03.239666 containerd[1684]: time="2025-10-30T00:03:03.239603380Z" level=info msg="connecting to shim 
d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae" address="unix:///run/containerd/s/a2ea74e1e22058015c30f45c7ab709fa11efa3d119cbae486141ff4dfa6a83df" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:03:03.257263 kubelet[2999]: E1030 00:03:03.257238 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b74d8c9c5-74m4p" podUID="3e83ebe5-6a04-4887-9815-f7c972a7870a" Oct 30 00:03:03.259026 kubelet[2999]: E1030 00:03:03.258690 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pkzg" podUID="b56fb7b2-ab30-4c17-b3d4-f41ec039c361" Oct 30 00:03:03.274415 systemd[1]: Started 
cri-containerd-d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae.scope - libcontainer container d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae. Oct 30 00:03:03.295077 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:03:03.348660 containerd[1684]: time="2025-10-30T00:03:03.348596210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pm7rt,Uid:5b6d7e41-a662-49c6-bb05-81f7e9c9a829,Namespace:calico-system,Attempt:0,} returns sandbox id \"d826571f1198183237b08d5d8cdd30ca20a55f1a5ad2e7bf4e788523472682ae\"" Oct 30 00:03:03.349883 containerd[1684]: time="2025-10-30T00:03:03.349867556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:03:03.700098 containerd[1684]: time="2025-10-30T00:03:03.700065306Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:03.701374 containerd[1684]: time="2025-10-30T00:03:03.701354933Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:03:03.701431 containerd[1684]: time="2025-10-30T00:03:03.701418880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:03:03.701565 kubelet[2999]: E1030 00:03:03.701528 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:03:03.701633 kubelet[2999]: E1030 
00:03:03.701572 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:03:03.705702 kubelet[2999]: E1030 00:03:03.701635 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-pm7rt_calico-system(5b6d7e41-a662-49c6-bb05-81f7e9c9a829): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:03.705702 kubelet[2999]: E1030 00:03:03.701656 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pm7rt" podUID="5b6d7e41-a662-49c6-bb05-81f7e9c9a829" Oct 30 00:03:04.036034 containerd[1684]: time="2025-10-30T00:03:04.035766924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-64n8r,Uid:2d168db8-a265-440f-81ca-e805c0d25f52,Namespace:kube-system,Attempt:0,}" Oct 30 00:03:04.036688 containerd[1684]: time="2025-10-30T00:03:04.035971346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b74d8c9c5-hkpns,Uid:542f58e7-43b6-487a-8c10-54da1c4b4004,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:03:04.138614 systemd-networkd[1584]: cali337be359791: Link UP Oct 30 00:03:04.139742 
systemd-networkd[1584]: cali337be359791: Gained carrier Oct 30 00:03:04.154417 containerd[1684]: 2025-10-30 00:03:04.083 [INFO][4800] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--64n8r-eth0 coredns-66bc5c9577- kube-system 2d168db8-a265-440f-81ca-e805c0d25f52 840 0 2025-10-30 00:02:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-64n8r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali337be359791 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4" Namespace="kube-system" Pod="coredns-66bc5c9577-64n8r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--64n8r-" Oct 30 00:03:04.154417 containerd[1684]: 2025-10-30 00:03:04.083 [INFO][4800] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4" Namespace="kube-system" Pod="coredns-66bc5c9577-64n8r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--64n8r-eth0" Oct 30 00:03:04.154417 containerd[1684]: 2025-10-30 00:03:04.104 [INFO][4827] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4" HandleID="k8s-pod-network.e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4" Workload="localhost-k8s-coredns--66bc5c9577--64n8r-eth0" Oct 30 00:03:04.154417 containerd[1684]: 2025-10-30 00:03:04.104 [INFO][4827] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4" 
HandleID="k8s-pod-network.e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4" Workload="localhost-k8s-coredns--66bc5c9577--64n8r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-64n8r", "timestamp":"2025-10-30 00:03:04.104505645 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:03:04.154417 containerd[1684]: 2025-10-30 00:03:04.104 [INFO][4827] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:03:04.154417 containerd[1684]: 2025-10-30 00:03:04.104 [INFO][4827] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:03:04.154417 containerd[1684]: 2025-10-30 00:03:04.104 [INFO][4827] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:03:04.154417 containerd[1684]: 2025-10-30 00:03:04.113 [INFO][4827] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4" host="localhost" Oct 30 00:03:04.154417 containerd[1684]: 2025-10-30 00:03:04.118 [INFO][4827] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:03:04.154417 containerd[1684]: 2025-10-30 00:03:04.121 [INFO][4827] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:03:04.154417 containerd[1684]: 2025-10-30 00:03:04.123 [INFO][4827] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:03:04.154417 containerd[1684]: 2025-10-30 00:03:04.124 [INFO][4827] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:03:04.154417 containerd[1684]: 2025-10-30 00:03:04.124 
[INFO][4827] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4" host="localhost" Oct 30 00:03:04.154417 containerd[1684]: 2025-10-30 00:03:04.125 [INFO][4827] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4 Oct 30 00:03:04.154417 containerd[1684]: 2025-10-30 00:03:04.127 [INFO][4827] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4" host="localhost" Oct 30 00:03:04.154417 containerd[1684]: 2025-10-30 00:03:04.131 [INFO][4827] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4" host="localhost" Oct 30 00:03:04.154417 containerd[1684]: 2025-10-30 00:03:04.131 [INFO][4827] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4" host="localhost" Oct 30 00:03:04.154417 containerd[1684]: 2025-10-30 00:03:04.131 [INFO][4827] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:03:04.154417 containerd[1684]: 2025-10-30 00:03:04.131 [INFO][4827] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4" HandleID="k8s-pod-network.e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4" Workload="localhost-k8s-coredns--66bc5c9577--64n8r-eth0" Oct 30 00:03:04.158642 containerd[1684]: 2025-10-30 00:03:04.134 [INFO][4800] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4" Namespace="kube-system" Pod="coredns-66bc5c9577-64n8r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--64n8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--64n8r-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2d168db8-a265-440f-81ca-e805c0d25f52", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 2, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-64n8r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali337be359791", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:03:04.158642 containerd[1684]: 2025-10-30 00:03:04.135 [INFO][4800] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4" Namespace="kube-system" Pod="coredns-66bc5c9577-64n8r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--64n8r-eth0" Oct 30 00:03:04.158642 containerd[1684]: 2025-10-30 00:03:04.135 [INFO][4800] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali337be359791 ContainerID="e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4" Namespace="kube-system" Pod="coredns-66bc5c9577-64n8r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--64n8r-eth0" Oct 30 00:03:04.158642 containerd[1684]: 2025-10-30 00:03:04.139 [INFO][4800] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4" Namespace="kube-system" Pod="coredns-66bc5c9577-64n8r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--64n8r-eth0" Oct 30 00:03:04.158642 containerd[1684]: 2025-10-30 00:03:04.140 [INFO][4800] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4" Namespace="kube-system" Pod="coredns-66bc5c9577-64n8r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--64n8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--64n8r-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2d168db8-a265-440f-81ca-e805c0d25f52", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 2, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4", Pod:"coredns-66bc5c9577-64n8r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali337be359791", MAC:"56:15:c1:a6:b6:2e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:03:04.158642 containerd[1684]: 2025-10-30 00:03:04.149 [INFO][4800] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4" Namespace="kube-system" Pod="coredns-66bc5c9577-64n8r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--64n8r-eth0" Oct 30 00:03:04.182252 containerd[1684]: time="2025-10-30T00:03:04.181984283Z" level=info msg="connecting to shim e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4" address="unix:///run/containerd/s/6097a6e68bbafc971cbe9736ff0a45d4362891beccacbd7a384300bcd0c5cca5" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:03:04.224274 systemd[1]: Started cri-containerd-e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4.scope - libcontainer container e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4. 
Oct 30 00:03:04.232616 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:03:04.259739 kubelet[2999]: E1030 00:03:04.259701 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pm7rt" podUID="5b6d7e41-a662-49c6-bb05-81f7e9c9a829" Oct 30 00:03:04.278491 containerd[1684]: time="2025-10-30T00:03:04.278467293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-64n8r,Uid:2d168db8-a265-440f-81ca-e805c0d25f52,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4\"" Oct 30 00:03:04.291181 containerd[1684]: time="2025-10-30T00:03:04.286979255Z" level=info msg="CreateContainer within sandbox \"e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 00:03:04.295962 systemd-networkd[1584]: califb4e24cd319: Link UP Oct 30 00:03:04.296504 systemd-networkd[1584]: califb4e24cd319: Gained carrier Oct 30 00:03:04.310877 containerd[1684]: time="2025-10-30T00:03:04.310780463Z" level=info msg="Container a7d1643e0d14c895c4a2b429921e52cd625181070d531e7ea8dbc526d127f5b2: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:03:04.319611 containerd[1684]: 2025-10-30 00:03:04.088 [INFO][4804] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7b74d8c9c5--hkpns-eth0 calico-apiserver-7b74d8c9c5- calico-apiserver 
542f58e7-43b6-487a-8c10-54da1c4b4004 844 0 2025-10-30 00:02:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b74d8c9c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7b74d8c9c5-hkpns eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califb4e24cd319 [] [] }} ContainerID="a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704" Namespace="calico-apiserver" Pod="calico-apiserver-7b74d8c9c5-hkpns" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b74d8c9c5--hkpns-" Oct 30 00:03:04.319611 containerd[1684]: 2025-10-30 00:03:04.088 [INFO][4804] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704" Namespace="calico-apiserver" Pod="calico-apiserver-7b74d8c9c5-hkpns" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b74d8c9c5--hkpns-eth0" Oct 30 00:03:04.319611 containerd[1684]: 2025-10-30 00:03:04.119 [INFO][4832] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704" HandleID="k8s-pod-network.a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704" Workload="localhost-k8s-calico--apiserver--7b74d8c9c5--hkpns-eth0" Oct 30 00:03:04.319611 containerd[1684]: 2025-10-30 00:03:04.120 [INFO][4832] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704" HandleID="k8s-pod-network.a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704" Workload="localhost-k8s-calico--apiserver--7b74d8c9c5--hkpns-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", 
"pod":"calico-apiserver-7b74d8c9c5-hkpns", "timestamp":"2025-10-30 00:03:04.119749249 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:03:04.319611 containerd[1684]: 2025-10-30 00:03:04.120 [INFO][4832] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:03:04.319611 containerd[1684]: 2025-10-30 00:03:04.132 [INFO][4832] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:03:04.319611 containerd[1684]: 2025-10-30 00:03:04.133 [INFO][4832] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:03:04.319611 containerd[1684]: 2025-10-30 00:03:04.213 [INFO][4832] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704" host="localhost" Oct 30 00:03:04.319611 containerd[1684]: 2025-10-30 00:03:04.219 [INFO][4832] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:03:04.319611 containerd[1684]: 2025-10-30 00:03:04.240 [INFO][4832] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:03:04.319611 containerd[1684]: 2025-10-30 00:03:04.241 [INFO][4832] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:03:04.319611 containerd[1684]: 2025-10-30 00:03:04.243 [INFO][4832] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:03:04.319611 containerd[1684]: 2025-10-30 00:03:04.243 [INFO][4832] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704" host="localhost" Oct 30 00:03:04.319611 containerd[1684]: 2025-10-30 00:03:04.244 [INFO][4832] ipam/ipam.go 
1780: Creating new handle: k8s-pod-network.a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704 Oct 30 00:03:04.319611 containerd[1684]: 2025-10-30 00:03:04.254 [INFO][4832] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704" host="localhost" Oct 30 00:03:04.319611 containerd[1684]: 2025-10-30 00:03:04.271 [INFO][4832] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704" host="localhost" Oct 30 00:03:04.319611 containerd[1684]: 2025-10-30 00:03:04.271 [INFO][4832] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704" host="localhost" Oct 30 00:03:04.319611 containerd[1684]: 2025-10-30 00:03:04.271 [INFO][4832] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:03:04.319611 containerd[1684]: 2025-10-30 00:03:04.271 [INFO][4832] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704" HandleID="k8s-pod-network.a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704" Workload="localhost-k8s-calico--apiserver--7b74d8c9c5--hkpns-eth0" Oct 30 00:03:04.320604 containerd[1684]: 2025-10-30 00:03:04.274 [INFO][4804] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704" Namespace="calico-apiserver" Pod="calico-apiserver-7b74d8c9c5-hkpns" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b74d8c9c5--hkpns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b74d8c9c5--hkpns-eth0", GenerateName:"calico-apiserver-7b74d8c9c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"542f58e7-43b6-487a-8c10-54da1c4b4004", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 2, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b74d8c9c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7b74d8c9c5-hkpns", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb4e24cd319", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:03:04.320604 containerd[1684]: 2025-10-30 00:03:04.285 [INFO][4804] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704" Namespace="calico-apiserver" Pod="calico-apiserver-7b74d8c9c5-hkpns" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b74d8c9c5--hkpns-eth0" Oct 30 00:03:04.320604 containerd[1684]: 2025-10-30 00:03:04.285 [INFO][4804] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb4e24cd319 ContainerID="a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704" Namespace="calico-apiserver" Pod="calico-apiserver-7b74d8c9c5-hkpns" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b74d8c9c5--hkpns-eth0" Oct 30 00:03:04.320604 containerd[1684]: 2025-10-30 00:03:04.296 [INFO][4804] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704" Namespace="calico-apiserver" Pod="calico-apiserver-7b74d8c9c5-hkpns" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b74d8c9c5--hkpns-eth0" Oct 30 00:03:04.320604 containerd[1684]: 2025-10-30 00:03:04.300 [INFO][4804] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704" Namespace="calico-apiserver" Pod="calico-apiserver-7b74d8c9c5-hkpns" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b74d8c9c5--hkpns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b74d8c9c5--hkpns-eth0", 
GenerateName:"calico-apiserver-7b74d8c9c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"542f58e7-43b6-487a-8c10-54da1c4b4004", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 2, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b74d8c9c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704", Pod:"calico-apiserver-7b74d8c9c5-hkpns", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb4e24cd319", MAC:"e6:db:82:25:52:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:03:04.320604 containerd[1684]: 2025-10-30 00:03:04.315 [INFO][4804] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704" Namespace="calico-apiserver" Pod="calico-apiserver-7b74d8c9c5-hkpns" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b74d8c9c5--hkpns-eth0" Oct 30 00:03:04.321694 containerd[1684]: time="2025-10-30T00:03:04.321631156Z" level=info msg="CreateContainer within sandbox \"e4b78c95c258c7978468d01ebe20121a1c36d6857cd66ea28fd1f76e18f63ba4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"a7d1643e0d14c895c4a2b429921e52cd625181070d531e7ea8dbc526d127f5b2\"" Oct 30 00:03:04.322755 containerd[1684]: time="2025-10-30T00:03:04.322682840Z" level=info msg="StartContainer for \"a7d1643e0d14c895c4a2b429921e52cd625181070d531e7ea8dbc526d127f5b2\"" Oct 30 00:03:04.325334 containerd[1684]: time="2025-10-30T00:03:04.325316447Z" level=info msg="connecting to shim a7d1643e0d14c895c4a2b429921e52cd625181070d531e7ea8dbc526d127f5b2" address="unix:///run/containerd/s/6097a6e68bbafc971cbe9736ff0a45d4362891beccacbd7a384300bcd0c5cca5" protocol=ttrpc version=3 Oct 30 00:03:04.344234 systemd[1]: Started cri-containerd-a7d1643e0d14c895c4a2b429921e52cd625181070d531e7ea8dbc526d127f5b2.scope - libcontainer container a7d1643e0d14c895c4a2b429921e52cd625181070d531e7ea8dbc526d127f5b2. Oct 30 00:03:04.401082 containerd[1684]: time="2025-10-30T00:03:04.401054260Z" level=info msg="StartContainer for \"a7d1643e0d14c895c4a2b429921e52cd625181070d531e7ea8dbc526d127f5b2\" returns successfully" Oct 30 00:03:04.406050 containerd[1684]: time="2025-10-30T00:03:04.405927022Z" level=info msg="connecting to shim a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704" address="unix:///run/containerd/s/86dac8101eee1eaa701e0ef55a2fbdb7c56a568824ffdff720cf41b3b096a4fe" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:03:04.437289 systemd-networkd[1584]: calie51a7ad58aa: Gained IPv6LL Oct 30 00:03:04.438357 systemd[1]: Started cri-containerd-a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704.scope - libcontainer container a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704. 
Oct 30 00:03:04.466243 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:03:04.493274 containerd[1684]: time="2025-10-30T00:03:04.493231420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b74d8c9c5-hkpns,Uid:542f58e7-43b6-487a-8c10-54da1c4b4004,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a00635f3e7890de00ead652c578ca8e3bc0affc4cd6f5370bb2f1fc971240704\"" Oct 30 00:03:04.494426 containerd[1684]: time="2025-10-30T00:03:04.494395441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:03:04.849417 containerd[1684]: time="2025-10-30T00:03:04.849324474Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:04.852829 containerd[1684]: time="2025-10-30T00:03:04.852798310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:03:04.852916 containerd[1684]: time="2025-10-30T00:03:04.852862292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:03:04.853126 kubelet[2999]: E1030 00:03:04.853054 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:03:04.853328 kubelet[2999]: E1030 00:03:04.853092 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:03:04.884927 kubelet[2999]: E1030 00:03:04.884871 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b74d8c9c5-hkpns_calico-apiserver(542f58e7-43b6-487a-8c10-54da1c4b4004): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:04.885079 kubelet[2999]: E1030 00:03:04.885038 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b74d8c9c5-hkpns" podUID="542f58e7-43b6-487a-8c10-54da1c4b4004" Oct 30 00:03:05.265120 kubelet[2999]: E1030 00:03:05.264369 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pm7rt" podUID="5b6d7e41-a662-49c6-bb05-81f7e9c9a829" Oct 30 00:03:05.265120 kubelet[2999]: E1030 00:03:05.264736 2999 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b74d8c9c5-hkpns" podUID="542f58e7-43b6-487a-8c10-54da1c4b4004" Oct 30 00:03:05.288419 kubelet[2999]: I1030 00:03:05.288378 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-64n8r" podStartSLOduration=40.288366274 podStartE2EDuration="40.288366274s" podCreationTimestamp="2025-10-30 00:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:03:05.28694387 +0000 UTC m=+45.436589249" watchObservedRunningTime="2025-10-30 00:03:05.288366274 +0000 UTC m=+45.438011648" Oct 30 00:03:05.972214 systemd-networkd[1584]: cali337be359791: Gained IPv6LL Oct 30 00:03:06.269525 kubelet[2999]: E1030 00:03:06.269445 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b74d8c9c5-hkpns" podUID="542f58e7-43b6-487a-8c10-54da1c4b4004" Oct 30 00:03:06.356265 systemd-networkd[1584]: califb4e24cd319: Gained IPv6LL Oct 30 00:03:13.035082 containerd[1684]: 
time="2025-10-30T00:03:13.035055064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:03:13.413114 containerd[1684]: time="2025-10-30T00:03:13.413027140Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:13.413406 containerd[1684]: time="2025-10-30T00:03:13.413386087Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:03:13.413442 containerd[1684]: time="2025-10-30T00:03:13.413433019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:03:13.413569 kubelet[2999]: E1030 00:03:13.413532 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:03:13.413734 kubelet[2999]: E1030 00:03:13.413574 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:03:13.413734 kubelet[2999]: E1030 00:03:13.413621 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5d498cdb87-vd49x_calico-system(233c3511-5659-4261-b377-890b1ba99d60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:13.414832 containerd[1684]: time="2025-10-30T00:03:13.414818138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 00:03:13.758396 containerd[1684]: time="2025-10-30T00:03:13.758366579Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:13.758702 containerd[1684]: time="2025-10-30T00:03:13.758683402Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:03:13.758742 containerd[1684]: time="2025-10-30T00:03:13.758726893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:03:13.758859 kubelet[2999]: E1030 00:03:13.758828 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:03:13.758921 kubelet[2999]: E1030 00:03:13.758864 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:03:13.758921 kubelet[2999]: E1030 00:03:13.758910 
2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5d498cdb87-vd49x_calico-system(233c3511-5659-4261-b377-890b1ba99d60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:13.758959 kubelet[2999]: E1030 00:03:13.758934 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d498cdb87-vd49x" podUID="233c3511-5659-4261-b377-890b1ba99d60" Oct 30 00:03:15.035096 containerd[1684]: time="2025-10-30T00:03:15.035042211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 00:03:15.389870 containerd[1684]: time="2025-10-30T00:03:15.388814294Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:15.389870 containerd[1684]: time="2025-10-30T00:03:15.389842303Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:03:15.389988 containerd[1684]: time="2025-10-30T00:03:15.389917601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:03:15.390418 kubelet[2999]: E1030 00:03:15.390021 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:03:15.390418 kubelet[2999]: E1030 00:03:15.390067 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:03:15.390418 kubelet[2999]: E1030 00:03:15.390348 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7pkzg_calico-system(b56fb7b2-ab30-4c17-b3d4-f41ec039c361): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:15.390883 containerd[1684]: time="2025-10-30T00:03:15.390274327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:03:15.735662 containerd[1684]: time="2025-10-30T00:03:15.735628309Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:15.736036 containerd[1684]: time="2025-10-30T00:03:15.736014631Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:03:15.736086 containerd[1684]: time="2025-10-30T00:03:15.736072918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:03:15.736233 kubelet[2999]: E1030 00:03:15.736203 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:03:15.736280 kubelet[2999]: E1030 00:03:15.736238 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:03:15.736610 containerd[1684]: time="2025-10-30T00:03:15.736426466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 00:03:15.736786 kubelet[2999]: E1030 00:03:15.736709 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7d5c95967b-s9656_calico-system(dc87ca77-2c29-4c4e-beb8-2b81cdefd490): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found" logger="UnhandledError" Oct 30 00:03:15.736786 kubelet[2999]: E1030 00:03:15.736749 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c95967b-s9656" podUID="dc87ca77-2c29-4c4e-beb8-2b81cdefd490" Oct 30 00:03:16.096672 containerd[1684]: time="2025-10-30T00:03:16.096596182Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:16.097278 containerd[1684]: time="2025-10-30T00:03:16.096991329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:03:16.097278 containerd[1684]: time="2025-10-30T00:03:16.097035408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:03:16.097378 kubelet[2999]: E1030 00:03:16.097146 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:03:16.097378 kubelet[2999]: E1030 00:03:16.097173 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:03:16.097378 kubelet[2999]: E1030 00:03:16.097218 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7pkzg_calico-system(b56fb7b2-ab30-4c17-b3d4-f41ec039c361): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:16.097500 kubelet[2999]: E1030 00:03:16.097259 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pkzg" podUID="b56fb7b2-ab30-4c17-b3d4-f41ec039c361" Oct 30 00:03:17.034948 containerd[1684]: time="2025-10-30T00:03:17.034767891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:03:17.379823 containerd[1684]: time="2025-10-30T00:03:17.379715041Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:17.380999 containerd[1684]: time="2025-10-30T00:03:17.380781471Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:03:17.380999 containerd[1684]: time="2025-10-30T00:03:17.380829008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:03:17.381167 kubelet[2999]: E1030 00:03:17.380931 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:03:17.381167 kubelet[2999]: E1030 00:03:17.380965 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:03:17.381167 kubelet[2999]: E1030 00:03:17.381019 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-pm7rt_calico-system(5b6d7e41-a662-49c6-bb05-81f7e9c9a829): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:17.381167 
kubelet[2999]: E1030 00:03:17.381040 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pm7rt" podUID="5b6d7e41-a662-49c6-bb05-81f7e9c9a829" Oct 30 00:03:19.034121 containerd[1684]: time="2025-10-30T00:03:19.033958217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:03:19.391940 containerd[1684]: time="2025-10-30T00:03:19.391868340Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:19.392222 containerd[1684]: time="2025-10-30T00:03:19.392198537Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:03:19.392274 containerd[1684]: time="2025-10-30T00:03:19.392247717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:03:19.392396 kubelet[2999]: E1030 00:03:19.392368 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:03:19.392569 kubelet[2999]: E1030 00:03:19.392401 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:03:19.392569 kubelet[2999]: E1030 00:03:19.392561 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b74d8c9c5-hkpns_calico-apiserver(542f58e7-43b6-487a-8c10-54da1c4b4004): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:19.392608 kubelet[2999]: E1030 00:03:19.392584 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b74d8c9c5-hkpns" podUID="542f58e7-43b6-487a-8c10-54da1c4b4004" Oct 30 00:03:19.393375 containerd[1684]: time="2025-10-30T00:03:19.392794817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:03:19.784072 containerd[1684]: time="2025-10-30T00:03:19.784030903Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:19.784450 containerd[1684]: time="2025-10-30T00:03:19.784414305Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" Oct 30 00:03:19.784507 containerd[1684]: time="2025-10-30T00:03:19.784486166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:03:19.784924 kubelet[2999]: E1030 00:03:19.784606 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:03:19.784924 kubelet[2999]: E1030 00:03:19.784637 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:03:19.784924 kubelet[2999]: E1030 00:03:19.784687 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b74d8c9c5-74m4p_calico-apiserver(3e83ebe5-6a04-4887-9815-f7c972a7870a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:19.784924 kubelet[2999]: E1030 00:03:19.784707 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-7b74d8c9c5-74m4p" podUID="3e83ebe5-6a04-4887-9815-f7c972a7870a" Oct 30 00:03:25.036119 kubelet[2999]: E1030 00:03:25.035991 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d498cdb87-vd49x" podUID="233c3511-5659-4261-b377-890b1ba99d60" Oct 30 00:03:25.321823 containerd[1684]: time="2025-10-30T00:03:25.321744881Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7a1c6e305feea3adeac73b0a06b4995b4ff93915ae6a0bebf7453203bde1f48\" id:\"994ac68f71523e9d8dc821ef6e8c57bf6471af928c67854183b9314e6ee1325e\" pid:5022 exited_at:{seconds:1761782605 nanos:321528610}" Oct 30 00:03:29.035574 kubelet[2999]: E1030 00:03:29.035303 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pm7rt" podUID="5b6d7e41-a662-49c6-bb05-81f7e9c9a829" Oct 30 00:03:30.040024 kubelet[2999]: E1030 00:03:30.039051 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c95967b-s9656" podUID="dc87ca77-2c29-4c4e-beb8-2b81cdefd490" Oct 30 00:03:31.035205 kubelet[2999]: E1030 00:03:31.035160 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pkzg" podUID="b56fb7b2-ab30-4c17-b3d4-f41ec039c361" Oct 30 00:03:34.035580 kubelet[2999]: E1030 00:03:34.035551 2999 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b74d8c9c5-hkpns" podUID="542f58e7-43b6-487a-8c10-54da1c4b4004" Oct 30 00:03:34.036441 kubelet[2999]: E1030 00:03:34.036426 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b74d8c9c5-74m4p" podUID="3e83ebe5-6a04-4887-9815-f7c972a7870a" Oct 30 00:03:35.470757 systemd[1]: Started sshd@7-139.178.70.100:22-139.178.89.65:36634.service - OpenSSH per-connection server daemon (139.178.89.65:36634). Oct 30 00:03:35.702439 sshd[5045]: Accepted publickey for core from 139.178.89.65 port 36634 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:03:35.704349 sshd-session[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:03:35.709272 systemd-logind[1659]: New session 10 of user core. Oct 30 00:03:35.715256 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 30 00:03:37.160877 sshd[5048]: Connection closed by 139.178.89.65 port 36634 Oct 30 00:03:37.172465 systemd-logind[1659]: Session 10 logged out. Waiting for processes to exit. 
Oct 30 00:03:37.161780 sshd-session[5045]: pam_unix(sshd:session): session closed for user core Oct 30 00:03:37.172909 systemd[1]: sshd@7-139.178.70.100:22-139.178.89.65:36634.service: Deactivated successfully. Oct 30 00:03:37.174596 systemd[1]: session-10.scope: Deactivated successfully. Oct 30 00:03:37.176238 systemd-logind[1659]: Removed session 10. Oct 30 00:03:38.038122 containerd[1684]: time="2025-10-30T00:03:38.037848243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:03:38.415696 containerd[1684]: time="2025-10-30T00:03:38.414565898Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:38.415916 containerd[1684]: time="2025-10-30T00:03:38.415889706Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:03:38.415951 containerd[1684]: time="2025-10-30T00:03:38.415904138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:03:38.416198 kubelet[2999]: E1030 00:03:38.416171 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:03:38.416594 kubelet[2999]: E1030 00:03:38.416207 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:03:38.416594 kubelet[2999]: E1030 00:03:38.416283 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5d498cdb87-vd49x_calico-system(233c3511-5659-4261-b377-890b1ba99d60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:38.417332 containerd[1684]: time="2025-10-30T00:03:38.417315863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 00:03:38.892293 containerd[1684]: time="2025-10-30T00:03:38.892239467Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:38.897523 containerd[1684]: time="2025-10-30T00:03:38.897499922Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:03:38.897596 containerd[1684]: time="2025-10-30T00:03:38.897548026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:03:38.897654 kubelet[2999]: E1030 00:03:38.897629 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:03:38.897693 kubelet[2999]: E1030 00:03:38.897669 2999 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:03:38.897745 kubelet[2999]: E1030 00:03:38.897721 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5d498cdb87-vd49x_calico-system(233c3511-5659-4261-b377-890b1ba99d60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:38.897985 kubelet[2999]: E1030 00:03:38.897758 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d498cdb87-vd49x" podUID="233c3511-5659-4261-b377-890b1ba99d60" Oct 30 00:03:40.036189 containerd[1684]: time="2025-10-30T00:03:40.035808773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:03:40.401472 containerd[1684]: time="2025-10-30T00:03:40.401379388Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:40.411432 containerd[1684]: time="2025-10-30T00:03:40.411398227Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:03:40.411487 containerd[1684]: time="2025-10-30T00:03:40.411464926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:03:40.411932 kubelet[2999]: E1030 00:03:40.411560 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:03:40.411932 kubelet[2999]: E1030 00:03:40.411592 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:03:40.411932 kubelet[2999]: E1030 00:03:40.411653 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-pm7rt_calico-system(5b6d7e41-a662-49c6-bb05-81f7e9c9a829): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:40.411932 
kubelet[2999]: E1030 00:03:40.411678 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pm7rt" podUID="5b6d7e41-a662-49c6-bb05-81f7e9c9a829" Oct 30 00:03:42.173916 systemd[1]: Started sshd@8-139.178.70.100:22-139.178.89.65:53250.service - OpenSSH per-connection server daemon (139.178.89.65:53250). Oct 30 00:03:42.277690 sshd[5067]: Accepted publickey for core from 139.178.89.65 port 53250 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:03:42.278541 sshd-session[5067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:03:42.285361 systemd-logind[1659]: New session 11 of user core. Oct 30 00:03:42.291226 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 30 00:03:42.507169 sshd[5070]: Connection closed by 139.178.89.65 port 53250 Oct 30 00:03:42.507547 sshd-session[5067]: pam_unix(sshd:session): session closed for user core Oct 30 00:03:42.510240 systemd-logind[1659]: Session 11 logged out. Waiting for processes to exit. Oct 30 00:03:42.510860 systemd[1]: sshd@8-139.178.70.100:22-139.178.89.65:53250.service: Deactivated successfully. Oct 30 00:03:42.512259 systemd[1]: session-11.scope: Deactivated successfully. Oct 30 00:03:42.513313 systemd-logind[1659]: Removed session 11. 
Oct 30 00:03:44.037115 containerd[1684]: time="2025-10-30T00:03:44.037057980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:03:44.461239 containerd[1684]: time="2025-10-30T00:03:44.460963495Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:44.467309 containerd[1684]: time="2025-10-30T00:03:44.467280173Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:03:44.467869 containerd[1684]: time="2025-10-30T00:03:44.467399230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:03:44.467910 kubelet[2999]: E1030 00:03:44.467539 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:03:44.467910 kubelet[2999]: E1030 00:03:44.467570 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:03:44.467910 kubelet[2999]: E1030 00:03:44.467619 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-7d5c95967b-s9656_calico-system(dc87ca77-2c29-4c4e-beb8-2b81cdefd490): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:44.467910 kubelet[2999]: E1030 00:03:44.467641 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c95967b-s9656" podUID="dc87ca77-2c29-4c4e-beb8-2b81cdefd490" Oct 30 00:03:46.035217 containerd[1684]: time="2025-10-30T00:03:46.035176427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 00:03:46.381099 containerd[1684]: time="2025-10-30T00:03:46.381019910Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:46.381446 containerd[1684]: time="2025-10-30T00:03:46.381401048Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:03:46.383133 containerd[1684]: time="2025-10-30T00:03:46.383118627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:03:46.383282 kubelet[2999]: E1030 00:03:46.383257 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:03:46.383484 kubelet[2999]: E1030 00:03:46.383288 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:03:46.383484 kubelet[2999]: E1030 00:03:46.383418 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7pkzg_calico-system(b56fb7b2-ab30-4c17-b3d4-f41ec039c361): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:46.383813 containerd[1684]: time="2025-10-30T00:03:46.383793019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:03:46.748410 containerd[1684]: time="2025-10-30T00:03:46.748373552Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:46.748796 containerd[1684]: time="2025-10-30T00:03:46.748774664Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:03:46.748858 containerd[1684]: time="2025-10-30T00:03:46.748826565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 
00:03:46.748948 kubelet[2999]: E1030 00:03:46.748922 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:03:46.749010 kubelet[2999]: E1030 00:03:46.748953 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:03:46.749094 kubelet[2999]: E1030 00:03:46.749076 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b74d8c9c5-74m4p_calico-apiserver(3e83ebe5-6a04-4887-9815-f7c972a7870a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:46.749152 kubelet[2999]: E1030 00:03:46.749119 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b74d8c9c5-74m4p" podUID="3e83ebe5-6a04-4887-9815-f7c972a7870a" Oct 30 00:03:46.749931 containerd[1684]: time="2025-10-30T00:03:46.749449966Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 00:03:47.285390 containerd[1684]: time="2025-10-30T00:03:47.285266880Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:47.286138 containerd[1684]: time="2025-10-30T00:03:47.285865713Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:03:47.286222 containerd[1684]: time="2025-10-30T00:03:47.286186932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:03:47.287187 kubelet[2999]: E1030 00:03:47.287152 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:03:47.287249 kubelet[2999]: E1030 00:03:47.287185 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:03:47.287249 kubelet[2999]: E1030 00:03:47.287243 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod 
csi-node-driver-7pkzg_calico-system(b56fb7b2-ab30-4c17-b3d4-f41ec039c361): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:47.301032 kubelet[2999]: E1030 00:03:47.287273 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pkzg" podUID="b56fb7b2-ab30-4c17-b3d4-f41ec039c361" Oct 30 00:03:47.515309 systemd[1]: Started sshd@9-139.178.70.100:22-139.178.89.65:52364.service - OpenSSH per-connection server daemon (139.178.89.65:52364). Oct 30 00:03:47.572270 sshd[5086]: Accepted publickey for core from 139.178.89.65 port 52364 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:03:47.573307 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:03:47.576952 systemd-logind[1659]: New session 12 of user core. Oct 30 00:03:47.580200 systemd[1]: Started session-12.scope - Session 12 of User core. 
Oct 30 00:03:47.691133 sshd[5089]: Connection closed by 139.178.89.65 port 52364 Oct 30 00:03:47.692086 sshd-session[5086]: pam_unix(sshd:session): session closed for user core Oct 30 00:03:47.700549 systemd[1]: sshd@9-139.178.70.100:22-139.178.89.65:52364.service: Deactivated successfully. Oct 30 00:03:47.701938 systemd[1]: session-12.scope: Deactivated successfully. Oct 30 00:03:47.703345 systemd-logind[1659]: Session 12 logged out. Waiting for processes to exit. Oct 30 00:03:47.704711 systemd-logind[1659]: Removed session 12. Oct 30 00:03:47.707093 systemd[1]: Started sshd@10-139.178.70.100:22-139.178.89.65:52366.service - OpenSSH per-connection server daemon (139.178.89.65:52366). Oct 30 00:03:47.750409 sshd[5102]: Accepted publickey for core from 139.178.89.65 port 52366 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:03:47.751425 sshd-session[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:03:47.756322 systemd-logind[1659]: New session 13 of user core. Oct 30 00:03:47.760213 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 30 00:03:47.894332 sshd[5105]: Connection closed by 139.178.89.65 port 52366 Oct 30 00:03:47.896416 sshd-session[5102]: pam_unix(sshd:session): session closed for user core Oct 30 00:03:47.905033 systemd[1]: Started sshd@11-139.178.70.100:22-139.178.89.65:52376.service - OpenSSH per-connection server daemon (139.178.89.65:52376). Oct 30 00:03:47.905602 systemd[1]: sshd@10-139.178.70.100:22-139.178.89.65:52366.service: Deactivated successfully. Oct 30 00:03:47.907300 systemd[1]: session-13.scope: Deactivated successfully. Oct 30 00:03:47.910399 systemd-logind[1659]: Session 13 logged out. Waiting for processes to exit. Oct 30 00:03:47.913558 systemd-logind[1659]: Removed session 13. 
Oct 30 00:03:47.974867 sshd[5112]: Accepted publickey for core from 139.178.89.65 port 52376 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:03:47.975666 sshd-session[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:03:47.979227 systemd-logind[1659]: New session 14 of user core. Oct 30 00:03:47.986271 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 30 00:03:48.083912 sshd[5118]: Connection closed by 139.178.89.65 port 52376 Oct 30 00:03:48.084374 sshd-session[5112]: pam_unix(sshd:session): session closed for user core Oct 30 00:03:48.087068 systemd[1]: sshd@11-139.178.70.100:22-139.178.89.65:52376.service: Deactivated successfully. Oct 30 00:03:48.088653 systemd[1]: session-14.scope: Deactivated successfully. Oct 30 00:03:48.089159 systemd-logind[1659]: Session 14 logged out. Waiting for processes to exit. Oct 30 00:03:48.089851 systemd-logind[1659]: Removed session 14. Oct 30 00:03:49.034505 containerd[1684]: time="2025-10-30T00:03:49.034456584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:03:49.400192 containerd[1684]: time="2025-10-30T00:03:49.399946091Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:49.407559 containerd[1684]: time="2025-10-30T00:03:49.407526708Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:03:49.407617 containerd[1684]: time="2025-10-30T00:03:49.407594389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:03:49.407777 kubelet[2999]: E1030 00:03:49.407745 2999 log.go:32] "PullImage from image service failed" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:03:49.408018 kubelet[2999]: E1030 00:03:49.407781 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:03:49.408018 kubelet[2999]: E1030 00:03:49.407839 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b74d8c9c5-hkpns_calico-apiserver(542f58e7-43b6-487a-8c10-54da1c4b4004): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:49.408018 kubelet[2999]: E1030 00:03:49.407864 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b74d8c9c5-hkpns" podUID="542f58e7-43b6-487a-8c10-54da1c4b4004" Oct 30 00:03:52.039118 kubelet[2999]: E1030 00:03:52.037045 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d498cdb87-vd49x" podUID="233c3511-5659-4261-b377-890b1ba99d60" Oct 30 00:03:53.034843 kubelet[2999]: E1030 00:03:53.034799 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pm7rt" podUID="5b6d7e41-a662-49c6-bb05-81f7e9c9a829" Oct 30 00:03:53.094054 systemd[1]: Started sshd@12-139.178.70.100:22-139.178.89.65:52390.service - OpenSSH per-connection server daemon (139.178.89.65:52390). Oct 30 00:03:53.173012 sshd[5134]: Accepted publickey for core from 139.178.89.65 port 52390 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:03:53.174183 sshd-session[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:03:53.176810 systemd-logind[1659]: New session 15 of user core. 
Oct 30 00:03:53.181312 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 30 00:03:53.292726 sshd[5137]: Connection closed by 139.178.89.65 port 52390 Oct 30 00:03:53.294091 sshd-session[5134]: pam_unix(sshd:session): session closed for user core Oct 30 00:03:53.296994 systemd[1]: sshd@12-139.178.70.100:22-139.178.89.65:52390.service: Deactivated successfully. Oct 30 00:03:53.299647 systemd[1]: session-15.scope: Deactivated successfully. Oct 30 00:03:53.300906 systemd-logind[1659]: Session 15 logged out. Waiting for processes to exit. Oct 30 00:03:53.302132 systemd-logind[1659]: Removed session 15. Oct 30 00:03:55.289606 containerd[1684]: time="2025-10-30T00:03:55.289554209Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7a1c6e305feea3adeac73b0a06b4995b4ff93915ae6a0bebf7453203bde1f48\" id:\"e271e8cd59f1235c4cdf1acc6c247bb87de4c749b2bac84b99d1b9fe06835342\" pid:5161 exit_status:1 exited_at:{seconds:1761782635 nanos:289204078}" Oct 30 00:03:56.034609 kubelet[2999]: E1030 00:03:56.034141 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c95967b-s9656" podUID="dc87ca77-2c29-4c4e-beb8-2b81cdefd490" Oct 30 00:03:58.036175 kubelet[2999]: E1030 00:03:58.036144 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b74d8c9c5-74m4p" podUID="3e83ebe5-6a04-4887-9815-f7c972a7870a" Oct 30 00:03:58.308224 systemd[1]: Started sshd@13-139.178.70.100:22-139.178.89.65:34976.service - OpenSSH per-connection server daemon (139.178.89.65:34976). Oct 30 00:03:58.546135 sshd[5177]: Accepted publickey for core from 139.178.89.65 port 34976 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:03:58.549255 sshd-session[5177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:03:58.556007 systemd-logind[1659]: New session 16 of user core. Oct 30 00:03:58.562456 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 30 00:03:58.754304 sshd[5180]: Connection closed by 139.178.89.65 port 34976 Oct 30 00:03:58.754643 sshd-session[5177]: pam_unix(sshd:session): session closed for user core Oct 30 00:03:58.757222 systemd[1]: sshd@13-139.178.70.100:22-139.178.89.65:34976.service: Deactivated successfully. Oct 30 00:03:58.759058 systemd[1]: session-16.scope: Deactivated successfully. Oct 30 00:03:58.759968 systemd-logind[1659]: Session 16 logged out. Waiting for processes to exit. Oct 30 00:03:58.760868 systemd-logind[1659]: Removed session 16. 
Oct 30 00:03:59.035069 kubelet[2999]: E1030 00:03:59.035023 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pkzg" podUID="b56fb7b2-ab30-4c17-b3d4-f41ec039c361" Oct 30 00:04:03.045034 kubelet[2999]: E1030 00:04:03.044992 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b74d8c9c5-hkpns" podUID="542f58e7-43b6-487a-8c10-54da1c4b4004" Oct 30 00:04:03.766587 systemd[1]: Started sshd@14-139.178.70.100:22-139.178.89.65:34980.service - OpenSSH per-connection server daemon (139.178.89.65:34980). 
Oct 30 00:04:03.804147 sshd[5197]: Accepted publickey for core from 139.178.89.65 port 34980 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:04:03.804833 sshd-session[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:04:03.807835 systemd-logind[1659]: New session 17 of user core. Oct 30 00:04:03.814184 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 30 00:04:03.908423 sshd[5200]: Connection closed by 139.178.89.65 port 34980 Oct 30 00:04:03.908923 sshd-session[5197]: pam_unix(sshd:session): session closed for user core Oct 30 00:04:03.910759 systemd[1]: sshd@14-139.178.70.100:22-139.178.89.65:34980.service: Deactivated successfully. Oct 30 00:04:03.912190 systemd[1]: session-17.scope: Deactivated successfully. Oct 30 00:04:03.912895 systemd-logind[1659]: Session 17 logged out. Waiting for processes to exit. Oct 30 00:04:03.914081 systemd-logind[1659]: Removed session 17. Oct 30 00:04:05.034446 kubelet[2999]: E1030 00:04:05.034237 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pm7rt" podUID="5b6d7e41-a662-49c6-bb05-81f7e9c9a829" Oct 30 00:04:06.035733 kubelet[2999]: E1030 00:04:06.035699 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d498cdb87-vd49x" podUID="233c3511-5659-4261-b377-890b1ba99d60" Oct 30 00:04:08.035998 kubelet[2999]: E1030 00:04:08.035970 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c95967b-s9656" podUID="dc87ca77-2c29-4c4e-beb8-2b81cdefd490" Oct 30 00:04:08.918778 systemd[1]: Started sshd@15-139.178.70.100:22-139.178.89.65:49466.service - OpenSSH per-connection server daemon (139.178.89.65:49466). Oct 30 00:04:08.978745 sshd[5212]: Accepted publickey for core from 139.178.89.65 port 49466 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:04:08.979801 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:04:08.983018 systemd-logind[1659]: New session 18 of user core. Oct 30 00:04:08.987205 systemd[1]: Started session-18.scope - Session 18 of User core. 
Oct 30 00:04:09.088147 sshd[5215]: Connection closed by 139.178.89.65 port 49466 Oct 30 00:04:09.088662 sshd-session[5212]: pam_unix(sshd:session): session closed for user core Oct 30 00:04:09.095272 systemd[1]: sshd@15-139.178.70.100:22-139.178.89.65:49466.service: Deactivated successfully. Oct 30 00:04:09.096254 systemd[1]: session-18.scope: Deactivated successfully. Oct 30 00:04:09.096794 systemd-logind[1659]: Session 18 logged out. Waiting for processes to exit. Oct 30 00:04:09.098466 systemd[1]: Started sshd@16-139.178.70.100:22-139.178.89.65:49478.service - OpenSSH per-connection server daemon (139.178.89.65:49478). Oct 30 00:04:09.098994 systemd-logind[1659]: Removed session 18. Oct 30 00:04:09.144137 sshd[5227]: Accepted publickey for core from 139.178.89.65 port 49478 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:04:09.144850 sshd-session[5227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:04:09.147619 systemd-logind[1659]: New session 19 of user core. Oct 30 00:04:09.154198 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 30 00:04:09.529062 sshd[5230]: Connection closed by 139.178.89.65 port 49478 Oct 30 00:04:09.530044 sshd-session[5227]: pam_unix(sshd:session): session closed for user core Oct 30 00:04:09.535884 systemd[1]: sshd@16-139.178.70.100:22-139.178.89.65:49478.service: Deactivated successfully. Oct 30 00:04:09.537519 systemd[1]: session-19.scope: Deactivated successfully. Oct 30 00:04:09.538173 systemd-logind[1659]: Session 19 logged out. Waiting for processes to exit. Oct 30 00:04:09.539948 systemd[1]: Started sshd@17-139.178.70.100:22-139.178.89.65:49482.service - OpenSSH per-connection server daemon (139.178.89.65:49482). Oct 30 00:04:09.542827 systemd-logind[1659]: Removed session 19. 
Oct 30 00:04:09.602449 sshd[5240]: Accepted publickey for core from 139.178.89.65 port 49482 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:04:09.603441 sshd-session[5240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:04:09.606904 systemd-logind[1659]: New session 20 of user core. Oct 30 00:04:09.610200 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 30 00:04:10.036080 kubelet[2999]: E1030 00:04:10.036056 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b74d8c9c5-74m4p" podUID="3e83ebe5-6a04-4887-9815-f7c972a7870a" Oct 30 00:04:10.037004 kubelet[2999]: E1030 00:04:10.036896 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pkzg" podUID="b56fb7b2-ab30-4c17-b3d4-f41ec039c361" Oct 30 00:04:10.247717 sshd[5243]: Connection closed by 139.178.89.65 port 49482 Oct 30 00:04:10.248670 sshd-session[5240]: pam_unix(sshd:session): session closed for user core Oct 30 00:04:10.256309 systemd[1]: sshd@17-139.178.70.100:22-139.178.89.65:49482.service: Deactivated successfully. Oct 30 00:04:10.257790 systemd[1]: session-20.scope: Deactivated successfully. Oct 30 00:04:10.259626 systemd-logind[1659]: Session 20 logged out. Waiting for processes to exit. Oct 30 00:04:10.260862 systemd[1]: Started sshd@18-139.178.70.100:22-139.178.89.65:49484.service - OpenSSH per-connection server daemon (139.178.89.65:49484). Oct 30 00:04:10.263523 systemd-logind[1659]: Removed session 20. Oct 30 00:04:10.309366 sshd[5258]: Accepted publickey for core from 139.178.89.65 port 49484 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:04:10.310533 sshd-session[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:04:10.313688 systemd-logind[1659]: New session 21 of user core. Oct 30 00:04:10.317248 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 30 00:04:10.558512 sshd[5261]: Connection closed by 139.178.89.65 port 49484 Oct 30 00:04:10.560531 sshd-session[5258]: pam_unix(sshd:session): session closed for user core Oct 30 00:04:10.566393 systemd[1]: sshd@18-139.178.70.100:22-139.178.89.65:49484.service: Deactivated successfully. Oct 30 00:04:10.568530 systemd[1]: session-21.scope: Deactivated successfully. Oct 30 00:04:10.570178 systemd-logind[1659]: Session 21 logged out. Waiting for processes to exit. Oct 30 00:04:10.571836 systemd[1]: Started sshd@19-139.178.70.100:22-139.178.89.65:49492.service - OpenSSH per-connection server daemon (139.178.89.65:49492). Oct 30 00:04:10.576819 systemd-logind[1659]: Removed session 21. 
Oct 30 00:04:10.635489 sshd[5270]: Accepted publickey for core from 139.178.89.65 port 49492 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:04:10.636416 sshd-session[5270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:04:10.639043 systemd-logind[1659]: New session 22 of user core. Oct 30 00:04:10.644210 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 30 00:04:10.818765 sshd[5273]: Connection closed by 139.178.89.65 port 49492 Oct 30 00:04:10.818583 sshd-session[5270]: pam_unix(sshd:session): session closed for user core Oct 30 00:04:10.821663 systemd[1]: sshd@19-139.178.70.100:22-139.178.89.65:49492.service: Deactivated successfully. Oct 30 00:04:10.822773 systemd[1]: session-22.scope: Deactivated successfully. Oct 30 00:04:10.823345 systemd-logind[1659]: Session 22 logged out. Waiting for processes to exit. Oct 30 00:04:10.824882 systemd-logind[1659]: Removed session 22. Oct 30 00:04:15.828820 systemd[1]: Started sshd@20-139.178.70.100:22-139.178.89.65:49506.service - OpenSSH per-connection server daemon (139.178.89.65:49506). Oct 30 00:04:15.894005 sshd[5291]: Accepted publickey for core from 139.178.89.65 port 49506 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:04:15.895073 sshd-session[5291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:04:15.897883 systemd-logind[1659]: New session 23 of user core. Oct 30 00:04:15.904217 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 30 00:04:16.069278 sshd[5294]: Connection closed by 139.178.89.65 port 49506 Oct 30 00:04:16.068346 sshd-session[5291]: pam_unix(sshd:session): session closed for user core Oct 30 00:04:16.071355 systemd-logind[1659]: Session 23 logged out. Waiting for processes to exit. Oct 30 00:04:16.071433 systemd[1]: sshd@20-139.178.70.100:22-139.178.89.65:49506.service: Deactivated successfully. 
Oct 30 00:04:16.072584 systemd[1]: session-23.scope: Deactivated successfully. Oct 30 00:04:16.074531 systemd-logind[1659]: Removed session 23. Oct 30 00:04:17.035254 kubelet[2999]: E1030 00:04:17.035093 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b74d8c9c5-hkpns" podUID="542f58e7-43b6-487a-8c10-54da1c4b4004" Oct 30 00:04:18.035366 kubelet[2999]: E1030 00:04:18.035330 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pm7rt" podUID="5b6d7e41-a662-49c6-bb05-81f7e9c9a829" Oct 30 00:04:19.037153 kubelet[2999]: E1030 00:04:19.035034 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7d5c95967b-s9656" podUID="dc87ca77-2c29-4c4e-beb8-2b81cdefd490" Oct 30 00:04:19.038550 containerd[1684]: time="2025-10-30T00:04:19.038053522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:04:19.428649 containerd[1684]: time="2025-10-30T00:04:19.428570993Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:04:19.430369 containerd[1684]: time="2025-10-30T00:04:19.430271114Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:04:19.430369 containerd[1684]: time="2025-10-30T00:04:19.430356793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:04:19.430490 kubelet[2999]: E1030 00:04:19.430445 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:04:19.430532 kubelet[2999]: E1030 00:04:19.430499 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:04:19.430587 kubelet[2999]: E1030 00:04:19.430570 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod 
whisker-5d498cdb87-vd49x_calico-system(233c3511-5659-4261-b377-890b1ba99d60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:04:19.434180 containerd[1684]: time="2025-10-30T00:04:19.434166929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 00:04:19.780505 containerd[1684]: time="2025-10-30T00:04:19.780403190Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:04:19.781074 containerd[1684]: time="2025-10-30T00:04:19.780887264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:04:19.781242 containerd[1684]: time="2025-10-30T00:04:19.781233728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:04:19.781351 kubelet[2999]: E1030 00:04:19.781316 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:04:19.781415 kubelet[2999]: E1030 00:04:19.781364 2999 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:04:19.781444 kubelet[2999]: E1030 00:04:19.781419 2999 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5d498cdb87-vd49x_calico-system(233c3511-5659-4261-b377-890b1ba99d60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:04:19.782196 kubelet[2999]: E1030 00:04:19.781457 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d498cdb87-vd49x" podUID="233c3511-5659-4261-b377-890b1ba99d60" Oct 30 00:04:21.077511 systemd[1]: Started sshd@21-139.178.70.100:22-139.178.89.65:43906.service - OpenSSH per-connection server daemon (139.178.89.65:43906). 
Oct 30 00:04:21.124968 sshd[5314]: Accepted publickey for core from 139.178.89.65 port 43906 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:04:21.126047 sshd-session[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:04:21.132130 systemd-logind[1659]: New session 24 of user core. Oct 30 00:04:21.136211 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 30 00:04:21.265482 sshd[5317]: Connection closed by 139.178.89.65 port 43906 Oct 30 00:04:21.266438 sshd-session[5314]: pam_unix(sshd:session): session closed for user core Oct 30 00:04:21.268798 systemd[1]: sshd@21-139.178.70.100:22-139.178.89.65:43906.service: Deactivated successfully. Oct 30 00:04:21.270963 systemd[1]: session-24.scope: Deactivated successfully. Oct 30 00:04:21.272728 systemd-logind[1659]: Session 24 logged out. Waiting for processes to exit. Oct 30 00:04:21.276142 systemd-logind[1659]: Removed session 24. Oct 30 00:04:23.035871 kubelet[2999]: E1030 00:04:23.035818 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pkzg" 
podUID="b56fb7b2-ab30-4c17-b3d4-f41ec039c361" Oct 30 00:04:24.034731 kubelet[2999]: E1030 00:04:24.034700 2999 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b74d8c9c5-74m4p" podUID="3e83ebe5-6a04-4887-9815-f7c972a7870a" Oct 30 00:04:25.250039 containerd[1684]: time="2025-10-30T00:04:25.249907008Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7a1c6e305feea3adeac73b0a06b4995b4ff93915ae6a0bebf7453203bde1f48\" id:\"a1041adf770bff2e4e0b467cf4ed74d5e490c05411a028d3d51c8d20840657ab\" pid:5344 exited_at:{seconds:1761782665 nanos:249519281}" Oct 30 00:04:26.277844 systemd[1]: Started sshd@22-139.178.70.100:22-139.178.89.65:56786.service - OpenSSH per-connection server daemon (139.178.89.65:56786). Oct 30 00:04:26.332481 sshd[5358]: Accepted publickey for core from 139.178.89.65 port 56786 ssh2: RSA SHA256:uxUF/q85/qzc+1rZf4OxvPLwxJ/jaHnP3TYKM4pHLgM Oct 30 00:04:26.333283 sshd-session[5358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:04:26.339320 systemd-logind[1659]: New session 25 of user core. Oct 30 00:04:26.344349 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 30 00:04:26.447431 sshd[5361]: Connection closed by 139.178.89.65 port 56786 Oct 30 00:04:26.448862 sshd-session[5358]: pam_unix(sshd:session): session closed for user core Oct 30 00:04:26.451597 systemd[1]: sshd@22-139.178.70.100:22-139.178.89.65:56786.service: Deactivated successfully. Oct 30 00:04:26.454185 systemd[1]: session-25.scope: Deactivated successfully. 
Oct 30 00:04:26.455144 systemd-logind[1659]: Session 25 logged out. Waiting for processes to exit. Oct 30 00:04:26.456997 systemd-logind[1659]: Removed session 25.