Nov 4 04:49:42.643133 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 03:00:51 -00 2025 Nov 4 04:49:42.643154 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c479bf273e218e23ca82ede45f2bfcd1a1714a33fe5860e964ed0aea09538f01 Nov 4 04:49:42.643161 kernel: Disabled fast string operations Nov 4 04:49:42.643166 kernel: BIOS-provided physical RAM map: Nov 4 04:49:42.643170 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Nov 4 04:49:42.643175 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Nov 4 04:49:42.643180 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Nov 4 04:49:42.643186 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Nov 4 04:49:42.643191 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Nov 4 04:49:42.643195 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Nov 4 04:49:42.643200 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Nov 4 04:49:42.643205 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Nov 4 04:49:42.643209 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Nov 4 04:49:42.643214 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Nov 4 04:49:42.643221 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Nov 4 04:49:42.643227 kernel: NX (Execute Disable) protection: active Nov 4 04:49:42.643232 kernel: APIC: Static calls initialized Nov 4 04:49:42.643237 kernel: SMBIOS 2.7 present. Nov 4 04:49:42.643243 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Nov 4 04:49:42.643248 kernel: DMI: Memory slots populated: 1/128 Nov 4 04:49:42.643253 kernel: vmware: hypercall mode: 0x00 Nov 4 04:49:42.643260 kernel: Hypervisor detected: VMware Nov 4 04:49:42.643265 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Nov 4 04:49:42.643270 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Nov 4 04:49:42.643276 kernel: vmware: using clock offset of 3702081787 ns Nov 4 04:49:42.643281 kernel: tsc: Detected 3408.000 MHz processor Nov 4 04:49:42.643287 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 4 04:49:42.643293 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 4 04:49:42.643298 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Nov 4 04:49:42.643304 kernel: total RAM covered: 3072M Nov 4 04:49:42.643311 kernel: Found optimal setting for mtrr clean up Nov 4 04:49:42.643317 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Nov 4 04:49:42.643323 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Nov 4 04:49:42.643329 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 4 04:49:42.643334 kernel: Using GB pages for direct mapping Nov 4 04:49:42.643339 kernel: ACPI: Early table checksum verification disabled Nov 4 04:49:42.643345 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Nov 4 04:49:42.643352 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Nov 4 04:49:42.643358 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Nov 4 04:49:42.643364 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Nov 4 04:49:42.643371 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Nov 4 04:49:42.643377 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Nov 4 04:49:42.643382 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Nov 4 04:49:42.643389 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Nov 4 04:49:42.643395 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Nov 4 04:49:42.643401 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Nov 4 04:49:42.643407 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Nov 4 04:49:42.643412 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Nov 4 04:49:42.643418 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Nov 4 04:49:42.643425 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Nov 4 04:49:42.643431 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Nov 4 04:49:42.643436 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Nov 4 04:49:42.643442 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Nov 4 04:49:42.643448 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Nov 4 04:49:42.643453 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Nov 4 04:49:42.643459 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Nov 4 04:49:42.643466 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Nov 4 04:49:42.643472 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Nov 4 04:49:42.643477 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 4 04:49:42.643483 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Nov 4 04:49:42.643489 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Nov 4 04:49:42.643495 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff] Nov 4 04:49:42.643500 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff] Nov 4 04:49:42.643506 kernel: Zone ranges: Nov 4 04:49:42.643513 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 4 04:49:42.643519 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Nov 4 04:49:42.643525 kernel: Normal empty Nov 4 04:49:42.643530 kernel: Device empty Nov 4 04:49:42.643536 kernel: Movable zone start for each node Nov 4 04:49:42.643542 kernel: Early memory node ranges Nov 4 04:49:42.643547 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Nov 4 04:49:42.643553 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Nov 4 04:49:42.643559 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Nov 4 04:49:42.643565 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Nov 4 04:49:42.643571 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 4 04:49:42.643577 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Nov 4 04:49:42.643583 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Nov 4 04:49:42.643588 kernel: ACPI: PM-Timer IO Port: 0x1008 Nov 4 04:49:42.643594 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Nov 4 04:49:42.643601 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Nov 4 04:49:42.643607 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Nov 4 04:49:42.643612 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Nov 4 04:49:42.643617 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Nov 4 04:49:42.643623 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Nov 4 04:49:42.643629 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Nov 4 04:49:42.643634 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Nov 4 04:49:42.643640 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Nov 4 04:49:42.643646 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Nov 4 04:49:42.643652 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Nov 4 04:49:42.643657 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Nov 4 04:49:42.643663 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Nov 4 04:49:42.643668 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Nov 4 04:49:42.643674 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Nov 4 04:49:42.643679 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Nov 4 04:49:42.643685 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Nov 4 04:49:42.643690 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Nov 4 04:49:42.643697 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Nov 4 04:49:42.643703 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Nov 4 04:49:42.643708 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Nov 4 04:49:42.643714 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Nov 4 04:49:42.643719 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Nov 4 04:49:42.643725 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Nov 4 04:49:42.643731 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Nov 4 04:49:42.643737 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Nov 4 04:49:42.643743 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Nov 4 04:49:42.643749 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Nov 4 04:49:42.643754 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Nov 4 04:49:42.643760 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Nov 4 04:49:42.643765 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Nov 4 04:49:42.643771 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Nov 4 04:49:42.643777 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Nov 4 04:49:42.643782 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Nov 4 04:49:42.643788 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Nov 4 04:49:42.643794 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Nov 4 04:49:42.643799 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Nov 4 04:49:42.643805 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Nov 4 04:49:42.643810 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Nov 4 04:49:42.643816 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Nov 4 04:49:42.643826 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Nov 4 04:49:42.643832 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Nov 4 04:49:42.643838 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Nov 4 04:49:42.643843 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Nov 4 04:49:42.643850 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Nov 4 04:49:42.643856 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Nov 4 04:49:42.643862 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Nov 4 04:49:42.643868 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Nov 4 04:49:42.643874 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Nov 4 04:49:42.643881 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Nov 4 04:49:42.643886 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] 
high edge lint[0x1]) Nov 4 04:49:42.643892 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Nov 4 04:49:42.643898 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Nov 4 04:49:42.643904 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Nov 4 04:49:42.643910 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Nov 4 04:49:42.643916 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Nov 4 04:49:42.643922 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Nov 4 04:49:42.643929 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Nov 4 04:49:42.643935 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Nov 4 04:49:42.643940 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Nov 4 04:49:42.643947 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Nov 4 04:49:42.643953 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Nov 4 04:49:42.643959 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Nov 4 04:49:42.643965 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Nov 4 04:49:42.643971 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Nov 4 04:49:42.643978 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Nov 4 04:49:42.643984 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Nov 4 04:49:42.643990 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Nov 4 04:49:42.643995 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Nov 4 04:49:42.644002 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Nov 4 04:49:42.644007 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Nov 4 04:49:42.644013 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Nov 4 04:49:42.644019 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Nov 4 04:49:42.644025 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Nov 4 04:49:42.644033 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Nov 4 04:49:42.644038 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Nov 4 04:49:42.644045 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Nov 4 04:49:42.644050 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Nov 4 04:49:42.644057 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Nov 4 04:49:42.649071 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Nov 4 04:49:42.649083 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Nov 4 04:49:42.649090 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Nov 4 04:49:42.649099 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Nov 4 04:49:42.649105 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Nov 4 04:49:42.649111 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Nov 4 04:49:42.649117 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Nov 4 04:49:42.649123 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Nov 4 04:49:42.649129 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Nov 4 04:49:42.649135 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Nov 4 04:49:42.649141 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Nov 4 04:49:42.649148 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Nov 4 04:49:42.649154 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Nov 4 04:49:42.649160 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Nov 4 04:49:42.649166 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Nov 4 
04:49:42.649171 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Nov 4 04:49:42.649178 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Nov 4 04:49:42.649183 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Nov 4 04:49:42.649190 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Nov 4 04:49:42.649197 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Nov 4 04:49:42.649203 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Nov 4 04:49:42.649209 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Nov 4 04:49:42.649215 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Nov 4 04:49:42.649221 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Nov 4 04:49:42.649227 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Nov 4 04:49:42.649233 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Nov 4 04:49:42.649239 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Nov 4 04:49:42.649246 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Nov 4 04:49:42.649252 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Nov 4 04:49:42.649258 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Nov 4 04:49:42.649264 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Nov 4 04:49:42.649270 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Nov 4 04:49:42.649276 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Nov 4 04:49:42.649282 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Nov 4 04:49:42.649288 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Nov 4 04:49:42.649294 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Nov 4 04:49:42.649301 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Nov 4 04:49:42.649307 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Nov 4 04:49:42.649313 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Nov 4 04:49:42.649319 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Nov 4 04:49:42.649324 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Nov 4 04:49:42.649331 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Nov 4 04:49:42.649336 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Nov 4 04:49:42.649342 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Nov 4 04:49:42.649349 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Nov 4 04:49:42.649355 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Nov 4 04:49:42.649361 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Nov 4 04:49:42.649366 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Nov 4 04:49:42.649373 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Nov 4 04:49:42.649378 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Nov 4 04:49:42.649385 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Nov 4 04:49:42.649391 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 4 04:49:42.649398 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Nov 4 04:49:42.649404 kernel: TSC deadline timer available Nov 4 04:49:42.649410 kernel: CPU topo: Max. logical packages: 128 Nov 4 04:49:42.649417 kernel: CPU topo: Max. logical dies: 128 Nov 4 04:49:42.649422 kernel: CPU topo: Max. dies per package: 1 Nov 4 04:49:42.649429 kernel: CPU topo: Max. threads per core: 1 Nov 4 04:49:42.649435 kernel: CPU topo: Num. cores per package: 1 Nov 4 04:49:42.649441 kernel: CPU topo: Num. 
threads per package: 1 Nov 4 04:49:42.649448 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs Nov 4 04:49:42.649454 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Nov 4 04:49:42.649460 kernel: Booting paravirtualized kernel on VMware hypervisor Nov 4 04:49:42.649466 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 4 04:49:42.649473 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Nov 4 04:49:42.649479 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Nov 4 04:49:42.649486 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Nov 4 04:49:42.649493 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Nov 4 04:49:42.649499 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Nov 4 04:49:42.649506 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Nov 4 04:49:42.649512 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Nov 4 04:49:42.649518 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Nov 4 04:49:42.649524 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Nov 4 04:49:42.649530 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Nov 4 04:49:42.649537 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Nov 4 04:49:42.649543 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Nov 4 04:49:42.649549 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Nov 4 04:49:42.649555 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Nov 4 04:49:42.649561 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Nov 4 04:49:42.649567 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Nov 4 04:49:42.649573 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Nov 4 04:49:42.649580 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Nov 4 04:49:42.649586 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Nov 4 04:49:42.649593 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c479bf273e218e23ca82ede45f2bfcd1a1714a33fe5860e964ed0aea09538f01 Nov 4 04:49:42.649600 kernel: random: crng init done Nov 4 04:49:42.649606 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Nov 4 04:49:42.649613 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Nov 4 04:49:42.649619 kernel: printk: log_buf_len min size: 262144 bytes Nov 4 04:49:42.649626 kernel: printk: log_buf_len: 1048576 bytes Nov 4 04:49:42.649632 kernel: printk: early log buf free: 245688(93%) Nov 4 04:49:42.649638 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 4 04:49:42.649644 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 4 04:49:42.649651 kernel: Fallback order for Node 0: 0 Nov 4 04:49:42.649657 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 524157 Nov 4 04:49:42.649663 kernel: Policy zone: DMA32 Nov 4 04:49:42.649670 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 4 04:49:42.649676 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Nov 4 04:49:42.649682 kernel: ftrace: allocating 40092 entries in 157 pages Nov 4 04:49:42.649689 kernel: ftrace: allocated 157 pages with 5 groups Nov 4 04:49:42.649695 kernel: Dynamic Preempt: voluntary Nov 4 04:49:42.649701 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 4 04:49:42.649708 kernel: rcu: RCU event tracing is enabled. Nov 4 04:49:42.649714 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Nov 4 04:49:42.649721 kernel: Trampoline variant of Tasks RCU enabled. Nov 4 04:49:42.649727 kernel: Rude variant of Tasks RCU enabled. Nov 4 04:49:42.649733 kernel: Tracing variant of Tasks RCU enabled. Nov 4 04:49:42.649739 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 4 04:49:42.649745 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Nov 4 04:49:42.649751 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Nov 4 04:49:42.649758 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Nov 4 04:49:42.649765 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Nov 4 04:49:42.649771 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Nov 4 04:49:42.649777 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Nov 4 04:49:42.649783 kernel: Console: colour VGA+ 80x25 Nov 4 04:49:42.649789 kernel: printk: legacy console [tty0] enabled Nov 4 04:49:42.649795 kernel: printk: legacy console [ttyS0] enabled Nov 4 04:49:42.649801 kernel: ACPI: Core revision 20240827 Nov 4 04:49:42.649807 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Nov 4 04:49:42.649815 kernel: APIC: Switch to symmetric I/O mode setup Nov 4 04:49:42.649821 kernel: x2apic enabled Nov 4 04:49:42.649827 kernel: APIC: Switched APIC routing to: physical x2apic Nov 4 04:49:42.649834 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 4 04:49:42.649840 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Nov 4 04:49:42.649847 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Nov 4 04:49:42.649853 kernel: Disabled fast string operations Nov 4 04:49:42.649860 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 4 04:49:42.649866 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 4 04:49:42.649872 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 4 04:49:42.649878 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Nov 4 04:49:42.649885 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Nov 4 04:49:42.649891 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Nov 4 04:49:42.649897 kernel: RETBleed: Mitigation: Enhanced IBRS Nov 4 04:49:42.649904 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 4 04:49:42.649910 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 4 04:49:42.649917 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 4 04:49:42.649923 kernel: SRBDS: Unknown: Dependent on hypervisor status Nov 4 04:49:42.649929 kernel: GDS: Unknown: Dependent on hypervisor status Nov 4 04:49:42.649935 kernel: active return thunk: its_return_thunk Nov 4 04:49:42.649941 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 4 04:49:42.649949 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 4 04:49:42.649955 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 4 04:49:42.649961 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 4 04:49:42.649968 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 4 04:49:42.649974 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Nov 4 04:49:42.649980 kernel: Freeing SMP alternatives memory: 32K Nov 4 04:49:42.649986 kernel: pid_max: default: 131072 minimum: 1024 Nov 4 04:49:42.649994 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 4 04:49:42.650000 kernel: landlock: Up and running. Nov 4 04:49:42.650006 kernel: SELinux: Initializing. Nov 4 04:49:42.650013 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 4 04:49:42.650019 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 4 04:49:42.650025 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Nov 4 04:49:42.650032 kernel: Performance Events: Skylake events, core PMU driver. Nov 4 04:49:42.650039 kernel: core: CPUID marked event: 'cpu cycles' unavailable Nov 4 04:49:42.650045 kernel: core: CPUID marked event: 'instructions' unavailable Nov 4 04:49:42.650051 kernel: core: CPUID marked event: 'bus cycles' unavailable Nov 4 04:49:42.650058 kernel: core: CPUID marked event: 'cache references' unavailable Nov 4 04:49:42.650074 kernel: core: CPUID marked event: 'cache misses' unavailable Nov 4 04:49:42.650084 kernel: core: CPUID marked event: 'branch instructions' unavailable Nov 4 04:49:42.650094 kernel: core: CPUID marked event: 'branch misses' unavailable Nov 4 04:49:42.650111 kernel: ... version: 1 Nov 4 04:49:42.650118 kernel: ... bit width: 48 Nov 4 04:49:42.650125 kernel: ... generic registers: 4 Nov 4 04:49:42.650131 kernel: ... value mask: 0000ffffffffffff Nov 4 04:49:42.650137 kernel: ... max period: 000000007fffffff Nov 4 04:49:42.650144 kernel: ... fixed-purpose events: 0 Nov 4 04:49:42.650150 kernel: ... 
event mask: 000000000000000f Nov 4 04:49:42.650157 kernel: signal: max sigframe size: 1776 Nov 4 04:49:42.650164 kernel: rcu: Hierarchical SRCU implementation. Nov 4 04:49:42.650170 kernel: rcu: Max phase no-delay instances is 400. Nov 4 04:49:42.650177 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level Nov 4 04:49:42.650183 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 4 04:49:42.650189 kernel: smp: Bringing up secondary CPUs ... Nov 4 04:49:42.650195 kernel: smpboot: x86: Booting SMP configuration: Nov 4 04:49:42.650202 kernel: .... node #0, CPUs: #1 Nov 4 04:49:42.650208 kernel: Disabled fast string operations Nov 4 04:49:42.650214 kernel: smp: Brought up 1 node, 2 CPUs Nov 4 04:49:42.650220 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Nov 4 04:49:42.650227 kernel: Memory: 1942660K/2096628K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15360K init, 2684K bss, 142588K reserved, 0K cma-reserved) Nov 4 04:49:42.650233 kernel: devtmpfs: initialized Nov 4 04:49:42.650239 kernel: x86/mm: Memory block size: 128MB Nov 4 04:49:42.650246 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Nov 4 04:49:42.650253 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 4 04:49:42.650259 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Nov 4 04:49:42.650266 kernel: pinctrl core: initialized pinctrl subsystem Nov 4 04:49:42.650272 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 4 04:49:42.650278 kernel: audit: initializing netlink subsys (disabled) Nov 4 04:49:42.650284 kernel: audit: type=2000 audit(1762231779.299:1): state=initialized audit_enabled=0 res=1 Nov 4 04:49:42.650290 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 4 04:49:42.650297 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 4 04:49:42.650303 kernel: cpuidle: using governor menu Nov 4 04:49:42.650309 kernel: Simple Boot Flag at 0x36 set to 0x80 Nov 4 04:49:42.650316 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 4 04:49:42.650322 kernel: dca service started, version 1.12.1 Nov 4 04:49:42.650334 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f] Nov 4 04:49:42.650349 kernel: PCI: Using configuration type 1 for base access Nov 4 04:49:42.650357 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 4 04:49:42.650363 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 4 04:49:42.650370 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 4 04:49:42.650376 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 4 04:49:42.650383 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 4 04:49:42.650389 kernel: ACPI: Added _OSI(Module Device) Nov 4 04:49:42.650396 kernel: ACPI: Added _OSI(Processor Device) Nov 4 04:49:42.650403 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 4 04:49:42.650410 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 4 04:49:42.650416 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Nov 4 04:49:42.650426 kernel: ACPI: Interpreter enabled Nov 4 04:49:42.650435 kernel: ACPI: PM: (supports S0 S1 S5) Nov 4 04:49:42.650441 kernel: ACPI: Using IOAPIC for interrupt routing Nov 4 04:49:42.650448 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 4 04:49:42.650455 kernel: PCI: Using E820 reservations for host bridge windows Nov 4 04:49:42.650462 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Nov 4 04:49:42.650468 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Nov 4 04:49:42.650587 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 4 04:49:42.650659 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Nov 4 04:49:42.650726 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Nov 4 04:49:42.650738 kernel: PCI host bridge to bus 0000:00 Nov 4 04:49:42.650807 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 4 04:49:42.650869 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Nov 4 04:49:42.650927 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 4 04:49:42.650985 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 4 04:49:42.651044 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Nov 4 04:49:42.652728 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Nov 4 04:49:42.652861 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint Nov 4 04:49:42.652956 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge Nov 4 04:49:42.653073 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 4 04:49:42.653155 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Nov 4 04:49:42.653228 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint Nov 4 04:49:42.653300 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f] Nov 4 04:49:42.653373 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Nov 4 04:49:42.653441 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk Nov 4 04:49:42.653506 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Nov 4 04:49:42.653572 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk Nov 4 04:49:42.653645 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Nov 4 04:49:42.653711 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Nov 4 04:49:42.653777 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Nov 4 04:49:42.653855 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint Nov 4 
04:49:42.653921 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf] Nov 4 04:49:42.653986 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit] Nov 4 04:49:42.654057 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint Nov 4 04:49:42.654139 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f] Nov 4 04:49:42.654208 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref] Nov 4 04:49:42.654272 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff] Nov 4 04:49:42.654337 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref] Nov 4 04:49:42.654402 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 4 04:49:42.654471 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge Nov 4 04:49:42.654536 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Nov 4 04:49:42.654609 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 4 04:49:42.654688 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 4 04:49:42.654755 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 4 04:49:42.654828 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.654906 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 4 04:49:42.654976 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 4 04:49:42.655043 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 4 04:49:42.655154 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.655228 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.655295 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 4 04:49:42.655368 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 4 04:49:42.655438 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 4 04:49:42.655511 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 4 04:49:42.655577 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.655659 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.655732 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 4 04:49:42.655809 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 4 04:49:42.655879 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 4 04:49:42.655947 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 4 04:49:42.656019 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.656107 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.656177 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 4 04:49:42.656270 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 4 04:49:42.656351 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 4 04:49:42.656421 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.656504 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.656574 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 4 04:49:42.656645 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 4 04:49:42.656714 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 4 04:49:42.656780 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Nov 4 
04:49:42.656868 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.656954 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 4 04:49:42.657035 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 4 04:49:42.657125 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 4 04:49:42.657209 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.657285 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.657358 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 4 04:49:42.657424 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 4 04:49:42.657498 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 4 04:49:42.657578 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.657680 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.657773 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 4 04:49:42.657863 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 4 04:49:42.657932 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 4 04:49:42.658002 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.658087 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.658161 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 4 04:49:42.658248 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 4 04:49:42.658315 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 4 04:49:42.658381 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.658452 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.658523 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 4 04:49:42.658592 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 4 04:49:42.658657 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 4 04:49:42.658724 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 4 04:49:42.658792 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.658862 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.658928 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 4 04:49:42.659000 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 4 04:49:42.659217 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 4 04:49:42.659290 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 4 04:49:42.659361 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.659436 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.659506 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 4 04:49:42.659572 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 4 04:49:42.659637 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 4 04:49:42.659729 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.659801 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.659868 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 4 04:49:42.659936 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Nov 4 04:49:42.660007 kernel: pci 0000:00:16.4: 
bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 4 04:49:42.660097 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.660423 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.660493 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 4 04:49:42.660560 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 4 04:49:42.660630 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 4 04:49:42.660696 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.660776 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.660855 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 4 04:49:42.660922 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 4 04:49:42.660988 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 4 04:49:42.661055 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.661179 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.661245 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 4 04:49:42.661310 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 4 04:49:42.661375 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 4 04:49:42.661440 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.661513 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.661579 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 4 04:49:42.661644 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 4 04:49:42.661709 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 4 04:49:42.661773 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 4 04:49:42.661838 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.661910 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.661975 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 4 04:49:42.662039 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 4 04:49:42.662125 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 4 04:49:42.662201 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 4 04:49:42.662268 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.662339 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.662404 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 4 04:49:42.662470 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 4 04:49:42.662540 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 4 04:49:42.662606 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 4 04:49:42.662670 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.662740 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.662805 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 4 04:49:42.662872 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 4 04:49:42.662939 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 4 04:49:42.663004 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.665101 kernel: pci 0000:00:17.4: [15ad:07a0] type 
01 class 0x060400 PCIe Root Port Nov 4 04:49:42.665186 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 4 04:49:42.665256 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 4 04:49:42.665323 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 4 04:49:42.665395 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.665465 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.665533 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 4 04:49:42.665598 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 4 04:49:42.665663 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 4 04:49:42.665729 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.665803 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.665904 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 4 04:49:42.665971 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 4 04:49:42.666036 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 4 04:49:42.666132 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.666211 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.666288 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 4 04:49:42.666353 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 4 04:49:42.666418 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 4 04:49:42.666483 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.666552 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.666621 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 4 04:49:42.666687 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 4 04:49:42.666751 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 4 04:49:42.666816 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 4 04:49:42.666880 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.666949 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.667016 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 4 04:49:42.667127 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 4 04:49:42.667196 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 4 04:49:42.667260 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 4 04:49:42.667325 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.667395 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.667463 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 4 04:49:42.667528 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 4 04:49:42.667592 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 4 04:49:42.667656 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.667725 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.667790 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 4 04:49:42.667863 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 4 04:49:42.667929 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 
64bit pref] Nov 4 04:49:42.667993 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.668088 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.668158 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 4 04:49:42.668232 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 4 04:49:42.668300 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 4 04:49:42.668369 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.668439 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.668504 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 4 04:49:42.668569 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 4 04:49:42.668634 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 4 04:49:42.668701 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.668771 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.668836 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 4 04:49:42.668901 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 4 04:49:42.669555 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 4 04:49:42.669862 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.669946 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 04:49:42.670017 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 4 04:49:42.670100 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 4 04:49:42.670177 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 4 04:49:42.670246 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.670344 kernel: pci_bus 0000:01: extended config space not accessible Nov 4 04:49:42.670426 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 4 04:49:42.670496 kernel: pci_bus 0000:02: extended config space not accessible Nov 4 04:49:42.670506 kernel: acpiphp: Slot [32] registered Nov 4 04:49:42.670513 kernel: acpiphp: Slot [33] registered Nov 4 04:49:42.670520 kernel: acpiphp: Slot [34] registered Nov 4 04:49:42.670527 kernel: acpiphp: Slot [35] registered Nov 4 04:49:42.670535 kernel: acpiphp: Slot [36] registered Nov 4 04:49:42.670542 kernel: acpiphp: Slot [37] registered Nov 4 04:49:42.670548 kernel: acpiphp: Slot [38] registered Nov 4 04:49:42.670555 kernel: acpiphp: Slot [39] registered Nov 4 04:49:42.670562 kernel: acpiphp: Slot [40] registered Nov 4 04:49:42.670568 kernel: acpiphp: Slot [41] registered Nov 4 04:49:42.670574 kernel: acpiphp: Slot [42] registered Nov 4 04:49:42.670581 kernel: acpiphp: Slot [43] registered Nov 4 04:49:42.670588 kernel: acpiphp: Slot [44] registered Nov 4 04:49:42.670595 kernel: acpiphp: Slot [45] registered Nov 4 04:49:42.670601 kernel: acpiphp: Slot [46] registered Nov 4 04:49:42.670608 kernel: acpiphp: Slot [47] registered Nov 4 04:49:42.670615 kernel: acpiphp: Slot [48] registered Nov 4 04:49:42.670621 kernel: acpiphp: Slot [49] registered Nov 4 04:49:42.670627 kernel: acpiphp: Slot [50] registered Nov 4 04:49:42.670635 kernel: acpiphp: Slot [51] registered Nov 4 04:49:42.670641 kernel: acpiphp: Slot [52] registered Nov 4 04:49:42.670648 kernel: acpiphp: Slot [53] registered Nov 4 04:49:42.670654 kernel: acpiphp: Slot [54] registered Nov 4 04:49:42.670661 kernel: acpiphp: Slot [55] registered Nov 
4 04:49:42.670667 kernel: acpiphp: Slot [56] registered Nov 4 04:49:42.670674 kernel: acpiphp: Slot [57] registered Nov 4 04:49:42.670680 kernel: acpiphp: Slot [58] registered Nov 4 04:49:42.670688 kernel: acpiphp: Slot [59] registered Nov 4 04:49:42.670694 kernel: acpiphp: Slot [60] registered Nov 4 04:49:42.670700 kernel: acpiphp: Slot [61] registered Nov 4 04:49:42.670707 kernel: acpiphp: Slot [62] registered Nov 4 04:49:42.670713 kernel: acpiphp: Slot [63] registered Nov 4 04:49:42.670781 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Nov 4 04:49:42.670848 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Nov 4 04:49:42.670916 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Nov 4 04:49:42.670989 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Nov 4 04:49:42.671054 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Nov 4 04:49:42.671129 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Nov 4 04:49:42.671202 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Nov 4 04:49:42.671270 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Nov 4 04:49:42.671339 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Nov 4 04:49:42.671405 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Nov 4 04:49:42.671471 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 4 04:49:42.671537 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Nov 4 04:49:42.671603 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 4 04:49:42.671670 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 4 04:49:42.671739 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 4 04:49:42.671806 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 4 04:49:42.671872 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 4 04:49:42.671940 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 4 04:49:42.672006 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 4 04:49:42.672087 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 4 04:49:42.672168 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Nov 4 04:49:42.672236 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Nov 4 04:49:42.672302 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Nov 4 04:49:42.672367 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Nov 4 04:49:42.672441 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] Nov 4 04:49:42.672507 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Nov 4 04:49:42.672582 kernel: pci 0000:0b:00.0: supports D1 D2 Nov 4 04:49:42.672648 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 4 04:49:42.672714 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Nov 4 04:49:42.672780 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 4 04:49:42.672851 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 4 04:49:42.672918 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 4 04:49:42.672988 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 4 04:49:42.673055 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 4 04:49:42.673153 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 4 04:49:42.673220 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 4 04:49:42.673288 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 4 04:49:42.673360 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 4 04:49:42.673429 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 4 04:49:42.673495 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 4 04:49:42.673562 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 4 04:49:42.673628 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 4 04:49:42.673695 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 4 04:49:42.673762 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 4 04:49:42.673831 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 4 04:49:42.673898 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 4 04:49:42.673964 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 4 04:49:42.674030 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 4 04:49:42.674114 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 4 04:49:42.674183 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 4 04:49:42.674253 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 4 04:49:42.674318 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 4 04:49:42.674385 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 4 04:49:42.674395 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Nov 4 04:49:42.674401 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Nov 4 04:49:42.674408 kernel: ACPI: PCI: Interrupt link LNKB disabled Nov 4 04:49:42.674417 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 4 04:49:42.674424 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Nov 4 04:49:42.674431 kernel: iommu: Default domain type: Translated Nov 4 04:49:42.674437 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 4 04:49:42.674443 kernel: PCI: Using ACPI for IRQ routing Nov 4 04:49:42.674451 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 4 04:49:42.674458 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Nov 4 04:49:42.674465 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Nov 4 04:49:42.674529 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Nov 4 04:49:42.674593 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Nov 4 04:49:42.674657 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 4 04:49:42.674666 kernel: vgaarb: loaded Nov 4 04:49:42.674674 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Nov 4 04:49:42.674680 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Nov 4 04:49:42.674688 kernel: clocksource: Switched to clocksource tsc-early Nov 4 04:49:42.674695 kernel: VFS: Disk quotas dquot_6.6.0 Nov 4 04:49:42.674702 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 4 04:49:42.674708 kernel: pnp: PnP ACPI init Nov 4 04:49:42.674777 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Nov 4 04:49:42.674839 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Nov 4 
04:49:42.674900 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Nov 4 04:49:42.674967 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Nov 4 04:49:42.675044 kernel: pnp 00:06: [dma 2] Nov 4 04:49:42.675120 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Nov 4 04:49:42.675188 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Nov 4 04:49:42.675249 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Nov 4 04:49:42.675260 kernel: pnp: PnP ACPI: found 8 devices Nov 4 04:49:42.675267 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 4 04:49:42.675274 kernel: NET: Registered PF_INET protocol family Nov 4 04:49:42.675280 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 4 04:49:42.675287 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 4 04:49:42.675294 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 4 04:49:42.675300 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 4 04:49:42.675308 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 4 04:49:42.675315 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 4 04:49:42.675321 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 4 04:49:42.675328 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 4 04:49:42.675334 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 4 04:49:42.675341 kernel: NET: Registered PF_XDP protocol family Nov 4 04:49:42.675407 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Nov 4 04:49:42.675476 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 4 04:49:42.675543 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 4 04:49:42.675609 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 4 04:49:42.675674 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 4 04:49:42.675741 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Nov 4 04:49:42.675808 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Nov 4 04:49:42.675876 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Nov 4 04:49:42.675943 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Nov 4 04:49:42.676010 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Nov 4 04:49:42.676092 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Nov 4 04:49:42.676160 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Nov 4 04:49:42.676225 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Nov 4 04:49:42.676294 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Nov 4 04:49:42.676364 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Nov 4 04:49:42.676429 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Nov 4 04:49:42.676494 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 
1000 Nov 4 04:49:42.676577 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Nov 4 04:49:42.676644 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Nov 4 04:49:42.676710 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Nov 4 04:49:42.678660 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Nov 4 04:49:42.678745 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Nov 4 04:49:42.678822 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Nov 4 04:49:42.678897 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Nov 4 04:49:42.678971 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Nov 4 04:49:42.679039 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.679121 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.679189 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.679434 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.679507 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.679574 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.679642 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.679712 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.679780 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.679846 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.679912 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.679977 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.680043 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.680124 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.680193 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.680260 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.680331 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.680404 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.680472 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.680539 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.680608 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.680674 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.680739 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.680803 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.680870 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.680935 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.681004 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't 
assign; no space Nov 4 04:49:42.681079 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.681154 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.681221 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.681287 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.681353 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.681418 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.681492 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.681574 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.681640 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.681714 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.681786 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.681853 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.681921 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.681986 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.682051 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.682125 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.682191 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.682256 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.682321 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.682389 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.682454 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.682518 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.682583 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.682648 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.682713 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.682779 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.682844 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.682912 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.682978 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.683045 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.683134 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.683201 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.683267 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.683334 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.683403 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.683469 kernel: pci 
0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.683534 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.683600 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.683666 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.683741 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.683807 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.683876 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.683943 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.684009 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.684084 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.684150 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.684215 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.684283 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.684348 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.684412 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.684477 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.684541 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.684605 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.684673 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.684737 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.684802 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Nov 4 04:49:42.684867 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Nov 4 04:49:42.684933 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 4 04:49:42.684998 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Nov 4 04:49:42.685074 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 4 04:49:42.685143 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 4 04:49:42.685210 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 4 04:49:42.685279 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Nov 4 04:49:42.685350 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 4 04:49:42.685415 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 4 04:49:42.685479 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 4 04:49:42.685543 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Nov 4 04:49:42.685612 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 4 04:49:42.685677 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 4 04:49:42.685740 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 4 04:49:42.685804 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 4 04:49:42.685881 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 4 04:49:42.685947 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 4 04:49:42.686012 kernel: pci 
0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 4 04:49:42.686086 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 4 04:49:42.686154 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 4 04:49:42.686219 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 4 04:49:42.686284 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 4 04:49:42.686348 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 4 04:49:42.686413 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 4 04:49:42.686477 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 4 04:49:42.686544 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 4 04:49:42.686608 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 4 04:49:42.686672 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 4 04:49:42.686737 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 4 04:49:42.686802 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 4 04:49:42.686867 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 4 04:49:42.686934 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 4 04:49:42.687005 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 4 04:49:42.687084 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 4 04:49:42.687156 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Nov 4 04:49:42.687223 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 4 04:49:42.687288 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 4 04:49:42.687355 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 4 04:49:42.687420 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Nov 4 04:49:42.687486 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 4 04:49:42.687551 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 4 04:49:42.687617 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 4 04:49:42.687683 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 4 04:49:42.687748 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 4 04:49:42.687812 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 4 04:49:42.687879 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 4 04:49:42.687943 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 4 04:49:42.688008 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 4 04:49:42.688086 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 4 04:49:42.688154 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 4 04:49:42.688219 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 4 04:49:42.688284 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Nov 4 04:49:42.688356 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 4 04:49:42.688422 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 4 04:49:42.688487 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 4 04:49:42.688553 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 4 04:49:42.688618 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 4 04:49:42.688686 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 4 
04:49:42.688751 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 4 04:49:42.688818 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 4 04:49:42.688882 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 4 04:49:42.688954 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 4 04:49:42.689022 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 4 04:49:42.689102 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 4 04:49:42.690605 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 4 04:49:42.690684 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 4 04:49:42.690756 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 4 04:49:42.690823 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 4 04:49:42.690891 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 4 04:49:42.690958 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 4 04:49:42.691026 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 4 04:49:42.691109 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 4 04:49:42.691179 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 4 04:49:42.691254 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 4 04:49:42.691321 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 4 04:49:42.691385 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 4 04:49:42.691451 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 4 04:49:42.691517 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 4 04:49:42.691585 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 4 04:49:42.691651 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 4 04:49:42.691719 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 4 04:49:42.691785 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 4 04:49:42.691857 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 4 04:49:42.691925 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 4 04:49:42.691993 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 4 04:49:42.692058 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 4 04:49:42.692506 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 4 04:49:42.692576 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 4 04:49:42.692644 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 4 04:49:42.692713 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 4 04:49:42.692783 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 4 04:49:42.692849 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 4 04:49:42.692913 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 4 04:49:42.692980 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 4 04:49:42.693044 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 4 04:49:42.693117 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 4 04:49:42.693183 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 4 04:49:42.693252 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 4 04:49:42.693316 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 4 
04:49:42.693397 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 4 04:49:42.693465 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 4 04:49:42.693532 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 4 04:49:42.693596 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 4 04:49:42.693666 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 4 04:49:42.693735 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 4 04:49:42.693820 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 4 04:49:42.693891 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 4 04:49:42.694135 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 4 04:49:42.694205 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 4 04:49:42.694277 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 4 04:49:42.694343 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 4 04:49:42.694408 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 4 04:49:42.694477 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 4 04:49:42.694544 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 4 04:49:42.694610 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 4 04:49:42.694675 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Nov 4 04:49:42.694735 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 4 04:49:42.694799 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 4 04:49:42.694858 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Nov 4 04:49:42.694920 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Nov 4 04:49:42.694984 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Nov 4 04:49:42.695058 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Nov 4 04:49:42.695139 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 4 04:49:42.695199 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Nov 4 04:49:42.695263 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 4 04:49:42.695328 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 4 04:49:42.695394 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Nov 4 04:49:42.695456 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Nov 4 04:49:42.695520 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Nov 4 04:49:42.695581 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Nov 4 04:49:42.695640 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Nov 4 04:49:42.695706 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Nov 4 04:49:42.695772 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Nov 4 04:49:42.695836 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Nov 4 04:49:42.695924 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Nov 4 04:49:42.695986 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Nov 4 04:49:42.696045 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Nov 4 04:49:42.698153 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Nov 4 04:49:42.698223 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Nov 4 04:49:42.698293 kernel: 
pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Nov 4 04:49:42.698365 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 4 04:49:42.698432 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Nov 4 04:49:42.698493 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Nov 4 04:49:42.698560 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Nov 4 04:49:42.698623 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Nov 4 04:49:42.698688 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Nov 4 04:49:42.698748 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Nov 4 04:49:42.698813 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Nov 4 04:49:42.698875 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Nov 4 04:49:42.698935 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Nov 4 04:49:42.698999 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Nov 4 04:49:42.700873 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Nov 4 04:49:42.700954 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Nov 4 04:49:42.701025 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Nov 4 04:49:42.701106 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Nov 4 04:49:42.701169 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Nov 4 04:49:42.701234 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Nov 4 04:49:42.701295 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 4 04:49:42.701359 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Nov 4 04:49:42.701422 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 4 04:49:42.701486 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Nov 4 04:49:42.701547 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Nov 4 04:49:42.701610 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Nov 4 04:49:42.701694 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Nov 4 04:49:42.701763 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Nov 4 04:49:42.701826 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 4 04:49:42.701889 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Nov 4 04:49:42.701949 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Nov 4 04:49:42.702008 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 4 04:49:42.702175 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Nov 4 04:49:42.702243 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Nov 4 04:49:42.702303 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Nov 4 04:49:42.702369 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Nov 4 04:49:42.702429 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Nov 4 04:49:42.702488 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Nov 4 04:49:42.702553 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Nov 4 04:49:42.702615 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 4 04:49:42.702679 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Nov 4 04:49:42.702739 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 4 
04:49:42.702803 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Nov 4 04:49:42.702863 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Nov 4 04:49:42.702930 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Nov 4 04:49:42.702990 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Nov 4 04:49:42.703056 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Nov 4 04:49:42.703131 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 4 04:49:42.703196 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Nov 4 04:49:42.703260 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Nov 4 04:49:42.703319 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Nov 4 04:49:42.703387 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Nov 4 04:49:42.703447 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Nov 4 04:49:42.703506 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Nov 4 04:49:42.703571 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Nov 4 04:49:42.703635 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Nov 4 04:49:42.703716 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Nov 4 04:49:42.703778 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 4 04:49:42.703842 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Nov 4 04:49:42.703902 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Nov 4 04:49:42.703969 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Nov 4 04:49:42.704029 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Nov 4 04:49:42.704103 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Nov 4 04:49:42.704164 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Nov 4 04:49:42.704228 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Nov 4 04:49:42.704288 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 4 04:49:42.704362 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 4 04:49:42.704372 kernel: PCI: CLS 32 bytes, default 64 Nov 4 04:49:42.704379 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 4 04:49:42.704386 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Nov 4 04:49:42.704393 kernel: clocksource: Switched to clocksource tsc Nov 4 04:49:42.704400 kernel: Initialise system trusted keyrings Nov 4 04:49:42.704408 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 4 04:49:42.704415 kernel: Key type asymmetric registered Nov 4 04:49:42.704422 kernel: Asymmetric key parser 'x509' registered Nov 4 04:49:42.704429 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 4 04:49:42.704435 kernel: io scheduler mq-deadline registered Nov 4 04:49:42.704442 kernel: io scheduler kyber registered Nov 4 04:49:42.704449 kernel: io scheduler bfq registered Nov 4 04:49:42.704532 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Nov 4 04:49:42.704608 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.704684 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Nov 4 04:49:42.704751 kernel: pcieport 
0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.704823 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Nov 4 04:49:42.704891 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.704960 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Nov 4 04:49:42.705025 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.705102 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Nov 4 04:49:42.705170 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.705236 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Nov 4 04:49:42.705301 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.705371 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Nov 4 04:49:42.705437 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.705503 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Nov 4 04:49:42.705570 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.705636 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Nov 4 04:49:42.705701 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.705775 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Nov 4 04:49:42.705841 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.705908 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Nov 4 04:49:42.705974 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.706040 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Nov 4 04:49:42.706121 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.706199 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Nov 4 04:49:42.706268 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.706335 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Nov 4 04:49:42.706400 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.706467 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Nov 4 04:49:42.706538 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.706624 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Nov 4 04:49:42.706691 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.706759 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Nov 4 04:49:42.706826 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.706892 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Nov 4 04:49:42.706957 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.707027 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Nov 4 04:49:42.707112 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.707180 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Nov 4 04:49:42.707246 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.707330 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Nov 4 04:49:42.707398 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.707468 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Nov 4 04:49:42.707534 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.707600 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Nov 4 04:49:42.707667 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.707736 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Nov 4 04:49:42.707804 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.707875 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Nov 4 04:49:42.707943 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.708012 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Nov 4 04:49:42.708384 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.708459 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Nov 4 04:49:42.708547 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.708625 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Nov 4 04:49:42.708701 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.708784 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Nov 4 04:49:42.708860 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.708933 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Nov 4 04:49:42.709000 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ 
Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.709079 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Nov 4 04:49:42.709152 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.709221 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Nov 4 04:49:42.709288 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 04:49:42.709301 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 4 04:49:42.709308 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 4 04:49:42.709316 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 4 04:49:42.709323 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Nov 4 04:49:42.709336 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 4 04:49:42.709344 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 4 04:49:42.709416 kernel: rtc_cmos 00:01: registered as rtc0 Nov 4 04:49:42.709478 kernel: rtc_cmos 00:01: setting system clock to 2025-11-04T04:49:41 UTC (1762231781) Nov 4 04:49:42.709488 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 4 04:49:42.709550 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Nov 4 04:49:42.709559 kernel: intel_pstate: CPU model not supported Nov 4 04:49:42.709566 kernel: NET: Registered PF_INET6 protocol family Nov 4 04:49:42.709573 kernel: Segment Routing with IPv6 Nov 4 04:49:42.709580 kernel: In-situ OAM (IOAM) with IPv6 Nov 4 04:49:42.709587 kernel: NET: Registered PF_PACKET protocol family Nov 4 04:49:42.709595 kernel: Key type dns_resolver registered Nov 4 04:49:42.709603 kernel: IPI shorthand broadcast: enabled Nov 4 04:49:42.709610 kernel: sched_clock: Marking stable (1870003392, 170271187)->(2054697255, -14422676) Nov 4 04:49:42.709617 kernel: registered taskstats version 1 Nov 4 04:49:42.709624 kernel: Loading compiled-in X.509 certificates Nov 4 04:49:42.709631 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: dafbe857b8ef9eaad4381fdddb57853ce023547e' Nov 4 04:49:42.709638 kernel: Demotion targets for Node 0: null Nov 4 04:49:42.709645 kernel: Key type .fscrypt registered Nov 4 04:49:42.709653 kernel: Key type fscrypt-provisioning registered Nov 4 04:49:42.709660 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 4 04:49:42.709666 kernel: ima: Allocated hash algorithm: sha1 Nov 4 04:49:42.709673 kernel: ima: No architecture policies found Nov 4 04:49:42.709680 kernel: clk: Disabling unused clocks Nov 4 04:49:42.709687 kernel: Freeing unused kernel image (initmem) memory: 15360K Nov 4 04:49:42.709694 kernel: Write protecting the kernel read-only data: 45056k Nov 4 04:49:42.709702 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Nov 4 04:49:42.709709 kernel: Run /init as init process Nov 4 04:49:42.709716 kernel: with arguments: Nov 4 04:49:42.709723 kernel: /init Nov 4 04:49:42.709730 kernel: with environment: Nov 4 04:49:42.709736 kernel: HOME=/ Nov 4 04:49:42.709743 kernel: TERM=linux Nov 4 04:49:42.709749 kernel: SCSI subsystem initialized Nov 4 04:49:42.709757 kernel: VMware PVSCSI driver - version 1.0.7.0-k Nov 4 04:49:42.709764 kernel: vmw_pvscsi: using 64bit dma Nov 4 04:49:42.709771 kernel: vmw_pvscsi: max_id: 16 Nov 4 04:49:42.709778 kernel: vmw_pvscsi: setting ring_pages to 8 Nov 4 04:49:42.709784 kernel: vmw_pvscsi: enabling reqCallThreshold Nov 4 04:49:42.709791 kernel: vmw_pvscsi: driver-based request coalescing enabled Nov 4 04:49:42.709798 kernel: vmw_pvscsi: using MSI-X Nov 4 04:49:42.709877 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Nov 4 04:49:42.709948 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Nov 4 04:49:42.710028 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Nov 4 04:49:42.710110 kernel: sd 0:0:0:0: [sda] 25804800 512-byte logical blocks: (13.2 GB/12.3 GiB) Nov 4 04:49:42.710181 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 4 04:49:42.710254 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Nov 4 04:49:42.710323 kernel: sd 0:0:0:0: [sda] Cache data unavailable Nov 4 04:49:42.710409 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Nov 4 04:49:42.710420 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 4 04:49:42.710488 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 4 04:49:42.710498 kernel: libata version 3.00 loaded. Nov 4 04:49:42.710565 kernel: ata_piix 0000:00:07.1: version 2.13 Nov 4 04:49:42.710639 kernel: scsi host1: ata_piix Nov 4 04:49:42.710709 kernel: scsi host2: ata_piix Nov 4 04:49:42.710719 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 Nov 4 04:49:42.710727 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 Nov 4 04:49:42.710734 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Nov 4 04:49:42.710809 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Nov 4 04:49:42.710883 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Nov 4 04:49:42.710893 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 4 04:49:42.710900 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 4 04:49:42.710907 kernel: device-mapper: uevent: version 1.0.3 Nov 4 04:49:42.710914 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 4 04:49:42.711003 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 4 04:49:42.711016 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 4 04:49:42.711023 kernel: raid6: avx2x4 gen() 44770 MB/s Nov 4 04:49:42.711030 kernel: raid6: avx2x2 gen() 40312 MB/s Nov 4 04:49:42.711037 kernel: raid6: avx2x1 gen() 27224 MB/s Nov 4 04:49:42.711044 kernel: raid6: using algorithm avx2x4 gen() 44770 MB/s Nov 4 04:49:42.711051 kernel: raid6: .... xor() 8892 MB/s, rmw enabled Nov 4 04:49:42.711058 kernel: raid6: using avx2x2 recovery algorithm Nov 4 04:49:42.711079 kernel: xor: automatically using best checksumming function avx Nov 4 04:49:42.711087 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 4 04:49:42.711094 kernel: BTRFS: device fsid 6f0a5369-79b6-4a87-b9a6-85ec05be306c devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (196) Nov 4 04:49:42.711102 kernel: BTRFS info (device dm-0): first mount of filesystem 6f0a5369-79b6-4a87-b9a6-85ec05be306c Nov 4 04:49:42.711109 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 4 04:49:42.711116 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 4 04:49:42.711123 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 4 04:49:42.711132 kernel: BTRFS info (device dm-0): enabling free space tree Nov 4 04:49:42.711138 kernel: loop: module loaded Nov 4 04:49:42.711145 kernel: loop0: detected capacity change from 0 to 100136 Nov 4 04:49:42.711152 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 4 04:49:42.711160 systemd[1]: Successfully made /usr/ read-only. Nov 4 04:49:42.711169 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 04:49:42.711178 systemd[1]: Detected virtualization vmware. Nov 4 04:49:42.711185 systemd[1]: Detected architecture x86-64. Nov 4 04:49:42.711192 systemd[1]: Running in initrd. Nov 4 04:49:42.711199 systemd[1]: No hostname configured, using default hostname. Nov 4 04:49:42.711206 systemd[1]: Hostname set to . Nov 4 04:49:42.711213 systemd[1]: Initializing machine ID from random generator. Nov 4 04:49:42.711220 systemd[1]: Queued start job for default target initrd.target. Nov 4 04:49:42.711228 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 04:49:42.711235 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 04:49:42.711243 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 04:49:42.711250 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 4 04:49:42.711258 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 04:49:42.711265 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 4 04:49:42.711273 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Nov 4 04:49:42.711281 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 04:49:42.711288 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 04:49:42.711295 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 4 04:49:42.711303 systemd[1]: Reached target paths.target - Path Units. Nov 4 04:49:42.711310 systemd[1]: Reached target slices.target - Slice Units. Nov 4 04:49:42.711318 systemd[1]: Reached target swap.target - Swaps. Nov 4 04:49:42.711325 systemd[1]: Reached target timers.target - Timer Units. Nov 4 04:49:42.711332 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 04:49:42.711339 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 04:49:42.711346 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 4 04:49:42.711353 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 4 04:49:42.711360 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 4 04:49:42.711369 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 04:49:42.711376 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 04:49:42.711383 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 04:49:42.711391 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Nov 4 04:49:42.711398 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 4 04:49:42.711405 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 04:49:42.711412 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 4 04:49:42.711421 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 4 04:49:42.711428 systemd[1]: Starting systemd-fsck-usr.service... Nov 4 04:49:42.711435 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 04:49:42.711442 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 04:49:42.711450 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 04:49:42.711458 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 4 04:49:42.711465 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 04:49:42.711472 systemd[1]: Finished systemd-fsck-usr.service. Nov 4 04:49:42.711479 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 4 04:49:42.711503 systemd-journald[330]: Collecting audit messages is disabled. Nov 4 04:49:42.711522 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 04:49:42.711530 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 04:49:42.711537 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 4 04:49:42.711546 kernel: Bridge firewalling registered Nov 4 04:49:42.711553 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 04:49:42.711561 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Nov 4 04:49:42.711568 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 04:49:42.711575 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 04:49:42.711583 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 04:49:42.711590 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 04:49:42.711599 systemd-journald[330]: Journal started Nov 4 04:49:42.711614 systemd-journald[330]: Runtime Journal (/run/log/journal/91fc6730045c4abfae3e61be11abd215) is 4.8M, max 38.4M, 33.6M free. Nov 4 04:49:42.660206 systemd-modules-load[334]: Inserted module 'br_netfilter' Nov 4 04:49:42.714967 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 04:49:42.716860 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 4 04:49:42.717542 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 04:49:42.743268 systemd-tmpfiles[363]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 4 04:49:42.761941 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 04:49:42.782916 systemd-resolved[351]: Positive Trust Anchors: Nov 4 04:49:42.782926 systemd-resolved[351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 04:49:42.782928 systemd-resolved[351]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 04:49:42.782950 systemd-resolved[351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 04:49:42.802208 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 04:49:42.804150 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 4 04:49:42.804863 systemd-resolved[351]: Defaulting to hostname 'linux'. Nov 4 04:49:42.805528 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 04:49:42.805700 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 04:49:42.811149 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Nov 4 04:49:42.818533 dracut-cmdline[375]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 ip=139.178.70.105::139.178.70.97:28::ens192:off:1.1.1.1:1.0.0.1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c479bf273e218e23ca82ede45f2bfcd1a1714a33fe5860e964ed0aea09538f01 Nov 4 04:49:42.891077 kernel: Loading iSCSI transport class v2.0-870. 
Nov 4 04:49:42.915083 kernel: iscsi: registered transport (tcp) Nov 4 04:49:42.942091 kernel: iscsi: registered transport (qla4xxx) Nov 4 04:49:42.942151 kernel: QLogic iSCSI HBA Driver Nov 4 04:49:42.969215 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 04:49:42.987072 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 04:49:42.988381 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 04:49:43.013042 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 4 04:49:43.014126 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 4 04:49:43.016229 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 4 04:49:43.037632 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 4 04:49:43.038730 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 04:49:43.057920 systemd-udevd[616]: Using default interface naming scheme 'v257'. Nov 4 04:49:43.065373 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 04:49:43.067636 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 4 04:49:43.079338 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 04:49:43.081152 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 04:49:43.084975 dracut-pre-trigger[700]: rd.md=0: removing MD RAID activation Nov 4 04:49:43.107893 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 04:49:43.109155 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 4 04:49:43.114989 systemd-networkd[722]: lo: Link UP Nov 4 04:49:43.115331 systemd-networkd[722]: lo: Gained carrier Nov 4 04:49:43.116188 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 04:49:43.116350 systemd[1]: Reached target network.target - Network. Nov 4 04:49:43.195437 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 04:49:43.196664 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 4 04:49:43.298087 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Nov 4 04:49:43.301179 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Nov 4 04:49:43.305141 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Nov 4 04:49:43.343443 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Nov 4 04:49:43.343760 (udev-worker)[765]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Nov 4 04:49:43.356095 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Nov 4 04:49:43.357119 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Nov 4 04:49:43.359746 systemd-networkd[722]: eth0: Interface name change detected, renamed to ens192. Nov 4 04:49:43.363493 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Nov 4 04:49:43.370073 kernel: cryptd: max_cpu_qlen set to 1000 Nov 4 04:49:43.370114 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Nov 4 04:49:43.370755 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. 
Nov 4 04:49:43.373418 systemd-networkd[722]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Nov 4 04:49:43.374205 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 4 04:49:43.376134 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Nov 4 04:49:43.376280 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Nov 4 04:49:43.375743 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 04:49:43.375820 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 04:49:43.377502 systemd-networkd[722]: ens192: Link UP Nov 4 04:49:43.377505 systemd-networkd[722]: ens192: Gained carrier Nov 4 04:49:43.379298 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 04:49:43.386317 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 04:49:43.389600 kernel: AES CTR mode by8 optimization enabled Nov 4 04:49:43.420992 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 04:49:43.491295 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 4 04:49:43.491923 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 04:49:43.492247 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 04:49:43.492505 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 04:49:43.493300 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 4 04:49:43.504584 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 4 04:49:44.437213 systemd-networkd[722]: ens192: Gained IPv6LL Nov 4 04:49:44.448623 disk-uuid[849]: Warning: The kernel is still using the old partition table. Nov 4 04:49:44.448623 disk-uuid[849]: The new table will be used at the next reboot or after you Nov 4 04:49:44.448623 disk-uuid[849]: run partprobe(8) or kpartx(8) Nov 4 04:49:44.448623 disk-uuid[849]: The operation has completed successfully. Nov 4 04:49:44.455356 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 4 04:49:44.455433 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 4 04:49:44.456318 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 4 04:49:44.475771 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (882) Nov 4 04:49:44.475807 kernel: BTRFS info (device sda6): first mount of filesystem c6585032-901f-4e89-912e-5749e07725ea Nov 4 04:49:44.475816 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 04:49:44.480409 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 4 04:49:44.480443 kernel: BTRFS info (device sda6): enabling free space tree Nov 4 04:49:44.485072 kernel: BTRFS info (device sda6): last unmount of filesystem c6585032-901f-4e89-912e-5749e07725ea Nov 4 04:49:44.485154 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 4 04:49:44.486125 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 4 04:49:44.623839 ignition[901]: Ignition 2.22.0 Nov 4 04:49:44.623848 ignition[901]: Stage: fetch-offline Nov 4 04:49:44.623873 ignition[901]: no configs at "/usr/lib/ignition/base.d" Nov 4 04:49:44.623878 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 4 04:49:44.623930 ignition[901]: parsed url from cmdline: "" Nov 4 04:49:44.623932 ignition[901]: no config URL provided Nov 4 04:49:44.623934 ignition[901]: reading system config file "/usr/lib/ignition/user.ign" Nov 4 04:49:44.623939 ignition[901]: no config at "/usr/lib/ignition/user.ign" Nov 4 04:49:44.624295 ignition[901]: config successfully fetched Nov 4 04:49:44.624314 ignition[901]: parsing config with SHA512: f354d6f2f435257ba16570eb1417705631a2b52d90450ed660b3eafb057019b4dd27d113e2dc91b1e0dbe19f92a8c9e9868de6ba9ca52901127d0e24d426ef00 Nov 4 04:49:44.626602 unknown[901]: fetched base config from "system" Nov 4 04:49:44.626921 ignition[901]: fetch-offline: fetch-offline passed Nov 4 04:49:44.626611 unknown[901]: fetched user config from "vmware" Nov 4 04:49:44.626969 ignition[901]: Ignition finished successfully Nov 4 04:49:44.628562 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 4 04:49:44.628911 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 4 04:49:44.629619 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 4 04:49:44.648912 ignition[908]: Ignition 2.22.0 Nov 4 04:49:44.648921 ignition[908]: Stage: kargs Nov 4 04:49:44.649014 ignition[908]: no configs at "/usr/lib/ignition/base.d" Nov 4 04:49:44.649019 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 4 04:49:44.649667 ignition[908]: kargs: kargs passed Nov 4 04:49:44.649700 ignition[908]: Ignition finished successfully Nov 4 04:49:44.651267 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 4 04:49:44.652092 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 4 04:49:44.669932 ignition[914]: Ignition 2.22.0 Nov 4 04:49:44.669941 ignition[914]: Stage: disks Nov 4 04:49:44.670028 ignition[914]: no configs at "/usr/lib/ignition/base.d" Nov 4 04:49:44.670033 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 4 04:49:44.670498 ignition[914]: disks: disks passed Nov 4 04:49:44.670525 ignition[914]: Ignition finished successfully Nov 4 04:49:44.671696 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 4 04:49:44.672035 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 4 04:49:44.672285 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 4 04:49:44.672530 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 04:49:44.672714 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 04:49:44.672800 systemd[1]: Reached target basic.target - Basic System. Nov 4 04:49:44.673615 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 4 04:49:44.695831 systemd-fsck[922]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Nov 4 04:49:44.697200 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 4 04:49:44.698044 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 4 04:49:44.790071 kernel: EXT4-fs (sda9): mounted filesystem c35327fb-3cdd-496e-85aa-9e1b4133507f r/w with ordered data mode. Quota mode: none. 
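Ignition reports above that it found no config on the kernel command line or under /usr/lib/ignition and instead fetched the user config from the "vmware" provider; later in this log the same config is deleted from guestinfo properties. On VMware that hand-off happens through guestinfo variables in the VM's configuration, along these lines (a sketch of the key names only; the actual payload is not shown in this log):

  guestinfo.ignition.config.data = "<base64-encoded Ignition JSON>"
  guestinfo.ignition.config.data.encoding = "base64"
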
Nov 4 04:49:44.790528 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 4 04:49:44.790997 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 4 04:49:44.792760 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 4 04:49:44.795108 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 4 04:49:44.795496 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 4 04:49:44.795673 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 4 04:49:44.795687 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 04:49:44.799904 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 4 04:49:44.800786 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 4 04:49:44.810088 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (930) Nov 4 04:49:44.810125 kernel: BTRFS info (device sda6): first mount of filesystem c6585032-901f-4e89-912e-5749e07725ea Nov 4 04:49:44.812670 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 04:49:44.818362 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 4 04:49:44.818395 kernel: BTRFS info (device sda6): enabling free space tree Nov 4 04:49:44.819615 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 4 04:49:44.851902 initrd-setup-root[954]: cut: /sysroot/etc/passwd: No such file or directory Nov 4 04:49:44.854750 initrd-setup-root[961]: cut: /sysroot/etc/group: No such file or directory Nov 4 04:49:44.856691 initrd-setup-root[968]: cut: /sysroot/etc/shadow: No such file or directory Nov 4 04:49:44.859141 initrd-setup-root[975]: cut: /sysroot/etc/gshadow: No such file or directory Nov 4 04:49:44.924124 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 4 04:49:44.925149 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 4 04:49:44.926139 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 4 04:49:44.934301 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 4 04:49:44.936074 kernel: BTRFS info (device sda6): last unmount of filesystem c6585032-901f-4e89-912e-5749e07725ea Nov 4 04:49:44.950762 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 4 04:49:44.956187 ignition[1043]: INFO : Ignition 2.22.0 Nov 4 04:49:44.956187 ignition[1043]: INFO : Stage: mount Nov 4 04:49:44.956187 ignition[1043]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 04:49:44.956187 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 4 04:49:44.956187 ignition[1043]: INFO : mount: mount passed Nov 4 04:49:44.956187 ignition[1043]: INFO : Ignition finished successfully Nov 4 04:49:44.957871 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 4 04:49:44.958598 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 4 04:49:45.791473 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 4 04:49:45.814073 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1055) Nov 4 04:49:45.816465 kernel: BTRFS info (device sda6): first mount of filesystem c6585032-901f-4e89-912e-5749e07725ea Nov 4 04:49:45.816486 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 04:49:45.821079 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 4 04:49:45.821104 kernel: BTRFS info (device sda6): enabling free space tree Nov 4 04:49:45.820913 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 4 04:49:45.841891 ignition[1071]: INFO : Ignition 2.22.0 Nov 4 04:49:45.841891 ignition[1071]: INFO : Stage: files Nov 4 04:49:45.842321 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 04:49:45.842321 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 4 04:49:45.842651 ignition[1071]: DEBUG : files: compiled without relabeling support, skipping Nov 4 04:49:45.843294 ignition[1071]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 4 04:49:45.843294 ignition[1071]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 4 04:49:45.877643 ignition[1071]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 4 04:49:45.877848 ignition[1071]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 4 04:49:45.878018 ignition[1071]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 4 04:49:45.877878 unknown[1071]: wrote ssh authorized keys file for user: core Nov 4 04:49:45.904212 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 4 04:49:45.904419 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 4 04:49:45.947026 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 4 04:49:46.054092 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 4 04:49:46.054092 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 4 04:49:46.054092 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 4 04:49:46.054092 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 4 04:49:46.055131 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 4 04:49:46.055131 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 04:49:46.055131 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 04:49:46.055131 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 04:49:46.055131 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 04:49:46.056230 ignition[1071]: INFO : files: createFilesystemsFiles: 
createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 04:49:46.056585 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 04:49:46.056585 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 4 04:49:46.060886 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 4 04:49:46.060886 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 4 04:49:46.060886 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 4 04:49:46.614669 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 4 04:49:48.973862 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 4 04:49:48.974407 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Nov 4 04:49:48.975019 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Nov 4 04:49:48.975211 ignition[1071]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 4 04:49:48.975576 ignition[1071]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 4 04:49:48.976021 ignition[1071]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 4 04:49:48.976021 ignition[1071]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 4 04:49:48.976353 ignition[1071]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 4 04:49:48.976353 ignition[1071]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 4 04:49:48.976353 ignition[1071]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 4 04:49:48.976353 ignition[1071]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 4 04:49:48.976353 ignition[1071]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 4 04:49:48.998966 ignition[1071]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 4 04:49:49.001288 ignition[1071]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 4 04:49:49.001612 ignition[1071]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 4 04:49:49.001817 ignition[1071]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 4 04:49:49.001817 ignition[1071]: INFO : files: op(12): [finished] setting preset to enabled 
for "prepare-helm.service" Nov 4 04:49:49.002350 ignition[1071]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 4 04:49:49.002350 ignition[1071]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 4 04:49:49.002350 ignition[1071]: INFO : files: files passed Nov 4 04:49:49.002350 ignition[1071]: INFO : Ignition finished successfully Nov 4 04:49:49.003767 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 4 04:49:49.004942 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 4 04:49:49.006142 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 4 04:49:49.012474 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 4 04:49:49.012910 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 4 04:49:49.017527 initrd-setup-root-after-ignition[1105]: grep: Nov 4 04:49:49.017949 initrd-setup-root-after-ignition[1109]: grep: Nov 4 04:49:49.017949 initrd-setup-root-after-ignition[1105]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 4 04:49:49.017949 initrd-setup-root-after-ignition[1105]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 4 04:49:49.018637 initrd-setup-root-after-ignition[1109]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 4 04:49:49.019319 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 4 04:49:49.019705 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 4 04:49:49.020432 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 4 04:49:49.040310 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 4 04:49:49.040385 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 4 04:49:49.040679 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 4 04:49:49.040808 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 4 04:49:49.041176 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 4 04:49:49.041684 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 4 04:49:49.056819 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 4 04:49:49.057760 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 4 04:49:49.068934 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 04:49:49.069029 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 4 04:49:49.069251 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 04:49:49.069460 systemd[1]: Stopped target timers.target - Timer Units. Nov 4 04:49:49.069661 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 4 04:49:49.069737 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 4 04:49:49.070122 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 4 04:49:49.070285 systemd[1]: Stopped target basic.target - Basic System. Nov 4 04:49:49.070477 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Nov 4 04:49:49.070669 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 04:49:49.070878 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 4 04:49:49.071112 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 4 04:49:49.071308 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 4 04:49:49.071520 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 04:49:49.071735 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 4 04:49:49.071949 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 4 04:49:49.072153 systemd[1]: Stopped target swap.target - Swaps. Nov 4 04:49:49.072325 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 4 04:49:49.072396 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 4 04:49:49.072671 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 4 04:49:49.072912 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 04:49:49.073121 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 4 04:49:49.073167 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 04:49:49.073321 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 4 04:49:49.073384 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 4 04:49:49.073657 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 4 04:49:49.073722 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 4 04:49:49.073955 systemd[1]: Stopped target paths.target - Path Units. Nov 4 04:49:49.074114 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 4 04:49:49.074159 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 04:49:49.074359 systemd[1]: Stopped target slices.target - Slice Units. Nov 4 04:49:49.074556 systemd[1]: Stopped target sockets.target - Socket Units. Nov 4 04:49:49.074743 systemd[1]: iscsid.socket: Deactivated successfully. Nov 4 04:49:49.074793 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 04:49:49.074952 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 4 04:49:49.074998 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 04:49:49.075182 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 4 04:49:49.075251 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 4 04:49:49.075511 systemd[1]: ignition-files.service: Deactivated successfully. Nov 4 04:49:49.075580 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 4 04:49:49.077199 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 4 04:49:49.079113 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 4 04:49:49.079350 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 4 04:49:49.079528 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 04:49:49.079890 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 4 04:49:49.080086 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 04:49:49.080445 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Nov 4 04:49:49.080606 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 04:49:49.083259 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 4 04:49:49.083309 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 4 04:49:49.095299 ignition[1130]: INFO : Ignition 2.22.0 Nov 4 04:49:49.095299 ignition[1130]: INFO : Stage: umount Nov 4 04:49:49.095895 ignition[1130]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 04:49:49.095895 ignition[1130]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 4 04:49:49.096724 ignition[1130]: INFO : umount: umount passed Nov 4 04:49:49.096887 ignition[1130]: INFO : Ignition finished successfully Nov 4 04:49:49.097987 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 4 04:49:49.098253 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 4 04:49:49.098583 systemd[1]: Stopped target network.target - Network. Nov 4 04:49:49.098784 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 4 04:49:49.098930 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 4 04:49:49.100322 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 4 04:49:49.100513 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 4 04:49:49.100731 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 4 04:49:49.101129 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 4 04:49:49.101243 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 4 04:49:49.101269 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 4 04:49:49.101428 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 4 04:49:49.101554 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 4 04:49:49.107332 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 4 04:49:49.107593 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 4 04:49:49.108944 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 4 04:49:49.109238 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 4 04:49:49.109397 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 4 04:49:49.110397 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 4 04:49:49.112090 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 4 04:49:49.112127 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 04:49:49.112791 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Nov 4 04:49:49.112817 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Nov 4 04:49:49.112980 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 04:49:49.113643 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 4 04:49:49.114018 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 4 04:49:49.115716 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 4 04:49:49.115761 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 4 04:49:49.116265 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 4 04:49:49.116290 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Nov 4 04:49:49.124909 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 4 04:49:49.125927 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 4 04:49:49.126006 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 04:49:49.126445 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 4 04:49:49.126480 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 4 04:49:49.126653 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 4 04:49:49.126670 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 04:49:49.126888 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 4 04:49:49.126915 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 4 04:49:49.128132 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 4 04:49:49.128173 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 4 04:49:49.128312 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 4 04:49:49.128338 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 04:49:49.129432 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 4 04:49:49.132375 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 4 04:49:49.132412 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 04:49:49.132602 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 4 04:49:49.132626 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 04:49:49.132782 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 4 04:49:49.132806 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 04:49:49.132986 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 4 04:49:49.133009 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 04:49:49.133178 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 04:49:49.133200 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 04:49:49.143872 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 4 04:49:49.143929 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 4 04:49:49.145952 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 4 04:49:49.146203 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 4 04:49:49.221635 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 4 04:49:49.221699 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 4 04:49:49.222125 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 4 04:49:49.222252 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 4 04:49:49.222285 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 4 04:49:49.222879 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 4 04:49:49.239239 systemd[1]: Switching root. Nov 4 04:49:49.280862 systemd-journald[330]: Journal stopped Nov 4 04:49:51.166912 systemd-journald[330]: Received SIGTERM from PID 1 (systemd). 
Nov 4 04:49:51.166937 kernel: SELinux: policy capability network_peer_controls=1 Nov 4 04:49:51.166946 kernel: SELinux: policy capability open_perms=1 Nov 4 04:49:51.166953 kernel: SELinux: policy capability extended_socket_class=1 Nov 4 04:49:51.166959 kernel: SELinux: policy capability always_check_network=0 Nov 4 04:49:51.166965 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 4 04:49:51.166973 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 4 04:49:51.166980 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 4 04:49:51.166986 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 4 04:49:51.166992 kernel: SELinux: policy capability userspace_initial_context=0 Nov 4 04:49:51.166999 kernel: audit: type=1403 audit(1762231790.506:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 4 04:49:51.167008 systemd[1]: Successfully loaded SELinux policy in 114.135ms. Nov 4 04:49:51.167017 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.944ms. Nov 4 04:49:51.167025 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 04:49:51.167033 systemd[1]: Detected virtualization vmware. Nov 4 04:49:51.167040 systemd[1]: Detected architecture x86-64. Nov 4 04:49:51.167048 systemd[1]: Detected first boot. Nov 4 04:49:51.167056 systemd[1]: Initializing machine ID from random generator. Nov 4 04:49:51.167184 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Nov 4 04:49:51.167196 kernel: Guest personality initialized and is active Nov 4 04:49:51.167203 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 4 04:49:51.167212 kernel: Initialized host personality Nov 4 04:49:51.167220 zram_generator::config[1176]: No configuration found. Nov 4 04:49:51.167228 kernel: NET: Registered PF_VSOCK protocol family Nov 4 04:49:51.167235 systemd[1]: Populated /etc with preset unit settings. Nov 4 04:49:51.167244 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 4 04:49:51.167253 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Nov 4 04:49:51.167261 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 4 04:49:51.167268 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 4 04:49:51.167275 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 4 04:49:51.167283 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 4 04:49:51.167291 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 4 04:49:51.167300 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 4 04:49:51.167307 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 4 04:49:51.167315 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 4 04:49:51.167322 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 4 04:49:51.167332 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
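The "/etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences" warning above (repeated at each reload later in the log) comes from systemd unescaping backslash sequences when it parses Exec lines: the \K and \d inside the unit's grep -Po patterns are not escapes it recognizes, so it warns and passes them through unchanged. Reconstructed from the fragment systemd echoes, the offending line has roughly this shape (the shell wrapper and exact quoting in the real unit may differ):

  [Service]
  Type=oneshot
  ExecStart=/usr/bin/bash -c 'echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}'
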
Nov 4 04:49:51.167342 systemd[1]: Created slice user.slice - User and Session Slice. Nov 4 04:49:51.167350 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 04:49:51.167359 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 04:49:51.167369 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 4 04:49:51.167377 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 4 04:49:51.167385 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 4 04:49:51.167393 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 04:49:51.167400 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 4 04:49:51.167410 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 04:49:51.167418 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 04:49:51.167425 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 4 04:49:51.167433 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 4 04:49:51.167440 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 4 04:49:51.167448 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 4 04:49:51.167457 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 04:49:51.167465 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 04:49:51.167473 systemd[1]: Reached target slices.target - Slice Units. Nov 4 04:49:51.167480 systemd[1]: Reached target swap.target - Swaps. Nov 4 04:49:51.167488 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 4 04:49:51.167495 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 4 04:49:51.167504 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 4 04:49:51.167513 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 4 04:49:51.167520 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 04:49:51.167528 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 04:49:51.167536 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 4 04:49:51.167544 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 4 04:49:51.167552 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 4 04:49:51.167560 systemd[1]: Mounting media.mount - External Media Directory... Nov 4 04:49:51.167568 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 04:49:51.167575 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 4 04:49:51.167583 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 4 04:49:51.167591 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 4 04:49:51.167600 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 4 04:49:51.167608 systemd[1]: Reached target machines.target - Containers. 
Nov 4 04:49:51.167615 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 4 04:49:51.167623 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Nov 4 04:49:51.167631 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 04:49:51.167638 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 4 04:49:51.167647 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 04:49:51.167655 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 04:49:51.167662 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 04:49:51.167670 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 4 04:49:51.167677 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 04:49:51.167685 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 4 04:49:51.167694 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 4 04:49:51.167703 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 4 04:49:51.167711 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 4 04:49:51.167719 systemd[1]: Stopped systemd-fsck-usr.service. Nov 4 04:49:51.167727 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 04:49:51.167735 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 04:49:51.167743 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 04:49:51.167750 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 04:49:51.167760 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 4 04:49:51.167768 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 4 04:49:51.167776 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 4 04:49:51.167784 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 04:49:51.167793 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 4 04:49:51.167801 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 4 04:49:51.167810 systemd[1]: Mounted media.mount - External Media Directory. Nov 4 04:49:51.167818 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 4 04:49:51.167825 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 4 04:49:51.167833 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 4 04:49:51.167841 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 04:49:51.167849 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 4 04:49:51.167856 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 4 04:49:51.167866 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 04:49:51.167874 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Nov 4 04:49:51.167882 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 4 04:49:51.167889 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 04:49:51.167897 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 04:49:51.167905 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 04:49:51.167913 kernel: fuse: init (API version 7.41) Nov 4 04:49:51.167922 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 04:49:51.167930 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 4 04:49:51.167937 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 4 04:49:51.167945 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 04:49:51.167953 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 04:49:51.167961 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 4 04:49:51.167969 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 04:49:51.167979 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 4 04:49:51.167987 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 4 04:49:51.167998 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 4 04:49:51.168007 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 4 04:49:51.168015 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 04:49:51.168023 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 4 04:49:51.168032 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 04:49:51.168041 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 4 04:49:51.168049 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 04:49:51.170965 systemd-journald[1266]: Collecting audit messages is disabled. Nov 4 04:49:51.171006 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 4 04:49:51.171016 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 04:49:51.171025 systemd-journald[1266]: Journal started Nov 4 04:49:51.171169 systemd-journald[1266]: Runtime Journal (/run/log/journal/b5fcf0f09a5545d4bbba797dce30b69d) is 4.8M, max 38.4M, 33.6M free. Nov 4 04:49:50.953998 systemd[1]: Queued start job for default target multi-user.target. Nov 4 04:49:50.966188 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 4 04:49:50.966475 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 4 04:49:51.171737 jq[1246]: true Nov 4 04:49:51.172297 jq[1292]: true Nov 4 04:49:51.178073 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 04:49:51.178124 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 4 04:49:51.190010 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 4 04:49:51.191335 systemd[1]: Started systemd-journald.service - Journal Service. 
Nov 4 04:49:51.195269 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 4 04:49:51.197768 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 4 04:49:51.199381 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 4 04:49:51.199670 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 4 04:49:51.203076 kernel: loop1: detected capacity change from 0 to 219144 Nov 4 04:49:51.209077 kernel: ACPI: bus type drm_connector registered Nov 4 04:49:51.211923 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 04:49:51.212172 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 04:49:51.214618 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 4 04:49:51.217916 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 4 04:49:51.223361 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 4 04:49:51.226499 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 04:49:51.250802 systemd-tmpfiles[1300]: ACLs are not supported, ignoring. Nov 4 04:49:51.250817 systemd-tmpfiles[1300]: ACLs are not supported, ignoring. Nov 4 04:49:51.254145 systemd-journald[1266]: Time spent on flushing to /var/log/journal/b5fcf0f09a5545d4bbba797dce30b69d is 98.593ms for 1756 entries. Nov 4 04:49:51.254145 systemd-journald[1266]: System Journal (/var/log/journal/b5fcf0f09a5545d4bbba797dce30b69d) is 8M, max 588.1M, 580.1M free. Nov 4 04:49:51.355394 systemd-journald[1266]: Received client request to flush runtime journal. Nov 4 04:49:51.355419 kernel: loop2: detected capacity change from 0 to 111544 Nov 4 04:49:51.290423 ignition[1301]: Ignition 2.22.0 Nov 4 04:49:51.261617 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 04:49:51.290626 ignition[1301]: deleting config from guestinfo properties Nov 4 04:49:51.264154 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 4 04:49:51.328917 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 4 04:49:51.356660 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 4 04:49:51.359614 ignition[1301]: Successfully deleted config Nov 4 04:49:51.367142 kernel: loop3: detected capacity change from 0 to 119080 Nov 4 04:49:51.367047 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Nov 4 04:49:51.385748 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 4 04:49:51.388248 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 04:49:51.391156 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 04:49:51.405592 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 4 04:49:51.410669 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 04:49:51.413227 kernel: loop4: detected capacity change from 0 to 2960 Nov 4 04:49:51.414873 systemd-tmpfiles[1346]: ACLs are not supported, ignoring. Nov 4 04:49:51.415046 systemd-tmpfiles[1346]: ACLs are not supported, ignoring. Nov 4 04:49:51.419467 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 4 04:49:51.439028 kernel: loop5: detected capacity change from 0 to 219144 Nov 4 04:49:51.442767 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 4 04:49:51.491108 kernel: loop6: detected capacity change from 0 to 111544 Nov 4 04:49:51.500813 systemd-resolved[1345]: Positive Trust Anchors: Nov 4 04:49:51.501207 systemd-resolved[1345]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 04:49:51.501254 systemd-resolved[1345]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 04:49:51.501305 systemd-resolved[1345]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 04:49:51.505027 systemd-resolved[1345]: Defaulting to hostname 'linux'. Nov 4 04:49:51.506661 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 04:49:51.506846 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 04:49:51.513090 kernel: loop7: detected capacity change from 0 to 119080 Nov 4 04:49:51.530084 kernel: loop1: detected capacity change from 0 to 2960 Nov 4 04:49:51.537958 (sd-merge)[1353]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-vmware.raw'. Nov 4 04:49:51.541207 (sd-merge)[1353]: Merged extensions into '/usr'. Nov 4 04:49:51.545191 systemd[1]: Reload requested from client PID 1299 ('systemd-sysext') (unit systemd-sysext.service)... Nov 4 04:49:51.545292 systemd[1]: Reloading... Nov 4 04:49:51.610146 zram_generator::config[1387]: No configuration found. Nov 4 04:49:51.701768 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 4 04:49:51.748280 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 4 04:49:51.748534 systemd[1]: Reloading finished in 202 ms. Nov 4 04:49:51.763608 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 4 04:49:51.764004 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 4 04:49:51.769008 systemd[1]: Starting ensure-sysext.service... Nov 4 04:49:51.769982 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 04:49:51.773034 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 04:49:51.788443 systemd[1]: Reload requested from client PID 1443 ('systemctl') (unit ensure-sysext.service)... Nov 4 04:49:51.788453 systemd[1]: Reloading... Nov 4 04:49:51.795943 systemd-tmpfiles[1444]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 4 04:49:51.795963 systemd-tmpfiles[1444]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 4 04:49:51.796172 systemd-tmpfiles[1444]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
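The systemd-tmpfiles "Duplicate line for path ..., ignoring" notices above (for /var/lib/nfs/sm, /var/lib/nfs/sm.bak and /root here, and for /var/log during the initrd earlier in the log) mean that two tmpfiles.d fragments declare the same path and the later one in the scan order is skipped. The pattern looks like this; the modes and owners below are placeholders, not the actual Flatcar entries:

  # entry from a fragment processed earlier in the scan order
  d /root 0700 root root -
  # the same path declared again (as at /usr/lib/tmpfiles.d/provision.conf:20 above)
  # is reported as a duplicate and ignored
  d /root 0700 root root -
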
Nov 4 04:49:51.796351 systemd-tmpfiles[1444]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 4 04:49:51.796970 systemd-tmpfiles[1444]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 4 04:49:51.799191 systemd-tmpfiles[1444]: ACLs are not supported, ignoring. Nov 4 04:49:51.799231 systemd-tmpfiles[1444]: ACLs are not supported, ignoring. Nov 4 04:49:51.801979 systemd-udevd[1445]: Using default interface naming scheme 'v257'. Nov 4 04:49:51.839099 zram_generator::config[1490]: No configuration found. Nov 4 04:49:51.843041 systemd-tmpfiles[1444]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 04:49:51.843048 systemd-tmpfiles[1444]: Skipping /boot Nov 4 04:49:51.848358 systemd-tmpfiles[1444]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 04:49:51.848365 systemd-tmpfiles[1444]: Skipping /boot Nov 4 04:49:51.942074 kernel: mousedev: PS/2 mouse device common for all mice Nov 4 04:49:51.945536 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 4 04:49:51.949072 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 4 04:49:51.963075 kernel: ACPI: button: Power Button [PWRF] Nov 4 04:49:52.015809 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 4 04:49:52.015915 systemd[1]: Reloading finished in 227 ms. Nov 4 04:49:52.022443 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 04:49:52.026095 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Nov 4 04:49:52.028239 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 04:49:52.036895 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 04:49:52.040270 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 4 04:49:52.048772 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 4 04:49:52.050176 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 4 04:49:52.051584 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 04:49:52.055213 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 4 04:49:52.060256 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 04:49:52.061224 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 04:49:52.065535 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 04:49:52.071339 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 04:49:52.071495 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 04:49:52.071560 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 04:49:52.071620 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 4 04:49:52.073617 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 04:49:52.073717 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 04:49:52.073781 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 04:49:52.073838 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 04:49:52.077420 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 04:49:52.080988 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 04:49:52.081200 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 04:49:52.081270 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 04:49:52.081372 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 04:49:52.083996 systemd[1]: Finished ensure-sysext.service. Nov 4 04:49:52.087623 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 4 04:49:52.094320 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 4 04:49:52.133850 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 04:49:52.136240 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 04:49:52.141873 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Nov 4 04:49:52.146407 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 4 04:49:52.151688 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 04:49:52.152114 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 04:49:52.152437 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 04:49:52.154894 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 04:49:52.155023 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 04:49:52.155305 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 04:49:52.155411 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 04:49:52.155800 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 04:49:52.170113 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 4 04:49:52.182702 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 4 04:49:52.195634 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Nov 4 04:49:52.195875 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 4 04:49:52.209076 augenrules[1616]: No rules Nov 4 04:49:52.207381 systemd-networkd[1561]: lo: Link UP Nov 4 04:49:52.207384 systemd-networkd[1561]: lo: Gained carrier Nov 4 04:49:52.207457 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 4 04:49:52.208513 systemd[1]: Reached target time-set.target - System Time Set. Nov 4 04:49:52.209611 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 04:49:52.209775 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 04:49:52.211893 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 04:49:52.212238 systemd[1]: Reached target network.target - Network. Nov 4 04:49:52.212453 systemd-networkd[1561]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Nov 4 04:49:52.218074 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Nov 4 04:49:52.218237 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Nov 4 04:49:52.219149 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 4 04:49:52.222279 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 4 04:49:52.223268 systemd-networkd[1561]: ens192: Link UP Nov 4 04:49:52.225351 systemd-networkd[1561]: ens192: Gained carrier Nov 4 04:49:52.229726 systemd-timesyncd[1575]: Network configuration changed, trying to establish connection. Nov 4 04:51:30.694965 systemd-resolved[1345]: Clock change detected. Flushing caches. Nov 4 04:51:30.695351 systemd-timesyncd[1575]: Contacted time server 69.176.84.38:123 (1.flatcar.pool.ntp.org). Nov 4 04:51:30.695382 systemd-timesyncd[1575]: Initial clock synchronization to Tue 2025-11-04 04:51:30.694932 UTC. Nov 4 04:51:30.703799 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 4 04:51:30.718370 (udev-worker)[1543]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Nov 4 04:51:30.753376 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 04:51:30.881546 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 04:51:31.192195 ldconfig[1559]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 4 04:51:31.193780 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 4 04:51:31.194884 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 4 04:51:31.207746 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 4 04:51:31.208000 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 04:51:31.208219 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 4 04:51:31.208347 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 4 04:51:31.208461 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 4 04:51:31.208643 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 4 04:51:31.208789 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Nov 4 04:51:31.208902 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 4 04:51:31.209009 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 4 04:51:31.209029 systemd[1]: Reached target paths.target - Path Units. Nov 4 04:51:31.209116 systemd[1]: Reached target timers.target - Timer Units. Nov 4 04:51:31.210406 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 4 04:51:31.211490 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 4 04:51:31.213036 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 4 04:51:31.213338 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 4 04:51:31.213457 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 4 04:51:31.216302 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 4 04:51:31.216591 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 4 04:51:31.217102 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 4 04:51:31.217636 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 04:51:31.217731 systemd[1]: Reached target basic.target - Basic System. Nov 4 04:51:31.217853 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 4 04:51:31.217873 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 4 04:51:31.218618 systemd[1]: Starting containerd.service - containerd container runtime... Nov 4 04:51:31.221258 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 4 04:51:31.222272 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 4 04:51:31.223736 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 4 04:51:31.226300 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 4 04:51:31.226432 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 4 04:51:31.229051 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 4 04:51:31.232608 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 4 04:51:31.235187 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 4 04:51:31.237174 jq[1643]: false Nov 4 04:51:31.240234 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 4 04:51:31.243374 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 4 04:51:31.246508 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 4 04:51:31.247175 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 4 04:51:31.247668 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 4 04:51:31.250329 systemd[1]: Starting update-engine.service - Update Engine... Nov 4 04:51:31.253000 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Nov 4 04:51:31.255956 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Nov 4 04:51:31.261244 extend-filesystems[1644]: Found /dev/sda6 Nov 4 04:51:31.263201 jq[1655]: true Nov 4 04:51:31.263546 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 4 04:51:31.264053 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 4 04:51:31.264298 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 4 04:51:31.264820 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 4 04:51:31.265132 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 4 04:51:31.265720 google_oslogin_nss_cache[1645]: oslogin_cache_refresh[1645]: Refreshing passwd entry cache Nov 4 04:51:31.266739 oslogin_cache_refresh[1645]: Refreshing passwd entry cache Nov 4 04:51:31.273540 extend-filesystems[1644]: Found /dev/sda9 Nov 4 04:51:31.279734 google_oslogin_nss_cache[1645]: oslogin_cache_refresh[1645]: Failure getting users, quitting Nov 4 04:51:31.279734 google_oslogin_nss_cache[1645]: oslogin_cache_refresh[1645]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 4 04:51:31.279734 google_oslogin_nss_cache[1645]: oslogin_cache_refresh[1645]: Refreshing group entry cache Nov 4 04:51:31.279391 oslogin_cache_refresh[1645]: Failure getting users, quitting Nov 4 04:51:31.279412 oslogin_cache_refresh[1645]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 4 04:51:31.279449 oslogin_cache_refresh[1645]: Refreshing group entry cache Nov 4 04:51:31.284221 extend-filesystems[1644]: Checking size of /dev/sda9 Nov 4 04:51:31.286156 google_oslogin_nss_cache[1645]: oslogin_cache_refresh[1645]: Failure getting groups, quitting Nov 4 04:51:31.286156 google_oslogin_nss_cache[1645]: oslogin_cache_refresh[1645]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 4 04:51:31.285969 oslogin_cache_refresh[1645]: Failure getting groups, quitting Nov 4 04:51:31.285977 oslogin_cache_refresh[1645]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 4 04:51:31.290215 jq[1663]: true Nov 4 04:51:31.288505 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 4 04:51:31.295333 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 4 04:51:31.299557 tar[1661]: linux-amd64/LICENSE Nov 4 04:51:31.300485 tar[1661]: linux-amd64/helm Nov 4 04:51:31.307204 extend-filesystems[1644]: Resized partition /dev/sda9 Nov 4 04:51:31.307322 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Nov 4 04:51:31.307926 systemd[1]: motdgen.service: Deactivated successfully. Nov 4 04:51:31.316326 update_engine[1654]: I20251104 04:51:31.313113 1654 main.cc:92] Flatcar Update Engine starting Nov 4 04:51:31.311305 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 4 04:51:31.318158 extend-filesystems[1695]: resize2fs 1.47.3 (8-Jul-2025) Nov 4 04:51:31.318271 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... 
Nov 4 04:51:31.329394 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 1635323 blocks Nov 4 04:51:31.329439 kernel: EXT4-fs (sda9): resized filesystem to 1635323 Nov 4 04:51:31.343837 unknown[1694]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Nov 4 04:51:31.349156 extend-filesystems[1695]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 4 04:51:31.349156 extend-filesystems[1695]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 4 04:51:31.349156 extend-filesystems[1695]: The filesystem on /dev/sda9 is now 1635323 (4k) blocks long. Nov 4 04:51:31.349897 extend-filesystems[1644]: Resized filesystem in /dev/sda9 Nov 4 04:51:31.350346 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 4 04:51:31.351407 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 4 04:51:31.354764 unknown[1694]: Core dump limit set to -1 Nov 4 04:51:31.356397 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Nov 4 04:51:31.371835 dbus-daemon[1641]: [system] SELinux support is enabled Nov 4 04:51:31.371998 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 4 04:51:31.373717 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 4 04:51:31.373733 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 4 04:51:31.373863 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 4 04:51:31.373873 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 4 04:51:31.377887 systemd[1]: Started update-engine.service - Update Engine. Nov 4 04:51:31.378295 update_engine[1654]: I20251104 04:51:31.378245 1654 update_check_scheduler.cc:74] Next update check in 3m30s Nov 4 04:51:31.383165 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 4 04:51:31.418086 bash[1718]: Updated "/home/core/.ssh/authorized_keys" Nov 4 04:51:31.418748 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 4 04:51:31.419218 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 4 04:51:31.440841 systemd-logind[1653]: Watching system buttons on /dev/input/event2 (Power Button) Nov 4 04:51:31.440856 systemd-logind[1653]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 4 04:51:31.444326 systemd-logind[1653]: New seat seat0. Nov 4 04:51:31.453292 systemd[1]: Started systemd-logind.service - User Login Management. 
Nov 4 04:51:31.555602 locksmithd[1717]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 4 04:51:31.570129 containerd[1675]: time="2025-11-04T04:51:31Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 4 04:51:31.571065 containerd[1675]: time="2025-11-04T04:51:31.571048961Z" level=info msg="starting containerd" revision=75cb2b7193e4e490e9fbdc236c0e811ccaba3376 version=v2.1.4 Nov 4 04:51:31.583731 containerd[1675]: time="2025-11-04T04:51:31.583698744Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.584µs" Nov 4 04:51:31.583731 containerd[1675]: time="2025-11-04T04:51:31.583724325Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 4 04:51:31.583811 containerd[1675]: time="2025-11-04T04:51:31.583756569Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 4 04:51:31.583811 containerd[1675]: time="2025-11-04T04:51:31.583765727Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 4 04:51:31.583880 containerd[1675]: time="2025-11-04T04:51:31.583867819Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 4 04:51:31.583901 containerd[1675]: time="2025-11-04T04:51:31.583880536Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 04:51:31.583954 containerd[1675]: time="2025-11-04T04:51:31.583940864Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 04:51:31.583971 containerd[1675]: time="2025-11-04T04:51:31.583955437Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 04:51:31.584114 containerd[1675]: time="2025-11-04T04:51:31.584101944Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 04:51:31.584153 containerd[1675]: time="2025-11-04T04:51:31.584112995Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 04:51:31.584153 containerd[1675]: time="2025-11-04T04:51:31.584120095Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 04:51:31.584153 containerd[1675]: time="2025-11-04T04:51:31.584125461Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Nov 4 04:51:31.585304 containerd[1675]: time="2025-11-04T04:51:31.585287990Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Nov 4 04:51:31.585351 containerd[1675]: time="2025-11-04T04:51:31.585343231Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 4 04:51:31.585879 containerd[1675]: time="2025-11-04T04:51:31.585426104Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 4 04:51:31.585879 containerd[1675]: time="2025-11-04T04:51:31.585618322Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 04:51:31.585879 containerd[1675]: time="2025-11-04T04:51:31.585637598Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 04:51:31.585879 containerd[1675]: time="2025-11-04T04:51:31.585644143Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 4 04:51:31.585879 containerd[1675]: time="2025-11-04T04:51:31.585664917Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 4 04:51:31.585879 containerd[1675]: time="2025-11-04T04:51:31.585774365Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 4 04:51:31.585879 containerd[1675]: time="2025-11-04T04:51:31.585806019Z" level=info msg="metadata content store policy set" policy=shared Nov 4 04:51:31.631511 containerd[1675]: time="2025-11-04T04:51:31.631456147Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 4 04:51:31.633194 containerd[1675]: time="2025-11-04T04:51:31.631604698Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Nov 4 04:51:31.633194 containerd[1675]: time="2025-11-04T04:51:31.631719283Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Nov 4 04:51:31.633194 containerd[1675]: time="2025-11-04T04:51:31.631731329Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 4 04:51:31.633194 containerd[1675]: time="2025-11-04T04:51:31.631740806Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 4 04:51:31.633194 containerd[1675]: time="2025-11-04T04:51:31.631749261Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 4 04:51:31.633194 containerd[1675]: time="2025-11-04T04:51:31.631756405Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 4 04:51:31.633194 containerd[1675]: time="2025-11-04T04:51:31.631761776Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 4 04:51:31.633194 containerd[1675]: time="2025-11-04T04:51:31.631768273Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 4 04:51:31.633194 containerd[1675]: time="2025-11-04T04:51:31.631779533Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 4 04:51:31.633194 containerd[1675]: time="2025-11-04T04:51:31.631788388Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 4 04:51:31.633194 containerd[1675]: time="2025-11-04T04:51:31.631794655Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 4 04:51:31.633194 containerd[1675]: 
time="2025-11-04T04:51:31.631800337Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 4 04:51:31.633194 containerd[1675]: time="2025-11-04T04:51:31.631806991Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 4 04:51:31.633404 containerd[1675]: time="2025-11-04T04:51:31.631893833Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 4 04:51:31.633404 containerd[1675]: time="2025-11-04T04:51:31.631905503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 4 04:51:31.633404 containerd[1675]: time="2025-11-04T04:51:31.631915015Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 4 04:51:31.633404 containerd[1675]: time="2025-11-04T04:51:31.631924179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 4 04:51:31.633404 containerd[1675]: time="2025-11-04T04:51:31.631934948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 4 04:51:31.633404 containerd[1675]: time="2025-11-04T04:51:31.631941831Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 4 04:51:31.633404 containerd[1675]: time="2025-11-04T04:51:31.631948669Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 4 04:51:31.633404 containerd[1675]: time="2025-11-04T04:51:31.631954357Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 4 04:51:31.633404 containerd[1675]: time="2025-11-04T04:51:31.631960620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 4 04:51:31.633404 containerd[1675]: time="2025-11-04T04:51:31.631968335Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 4 04:51:31.633404 containerd[1675]: time="2025-11-04T04:51:31.631974266Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 4 04:51:31.633404 containerd[1675]: time="2025-11-04T04:51:31.631991177Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 4 04:51:31.633404 containerd[1675]: time="2025-11-04T04:51:31.632022706Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 4 04:51:31.633404 containerd[1675]: time="2025-11-04T04:51:31.632031106Z" level=info msg="Start snapshots syncer" Nov 4 04:51:31.633404 containerd[1675]: time="2025-11-04T04:51:31.632041823Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 4 04:51:31.634499 containerd[1675]: time="2025-11-04T04:51:31.634470868Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 4 04:51:31.636963 containerd[1675]: time="2025-11-04T04:51:31.636000982Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 4 04:51:31.636963 containerd[1675]: time="2025-11-04T04:51:31.636065006Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 4 04:51:31.636963 containerd[1675]: time="2025-11-04T04:51:31.636172123Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 4 04:51:31.636963 containerd[1675]: time="2025-11-04T04:51:31.636186497Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 4 04:51:31.636963 containerd[1675]: time="2025-11-04T04:51:31.636193489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 4 04:51:31.636963 containerd[1675]: time="2025-11-04T04:51:31.636200547Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 4 04:51:31.636963 containerd[1675]: time="2025-11-04T04:51:31.636207774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 4 04:51:31.636963 containerd[1675]: time="2025-11-04T04:51:31.636214212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 4 04:51:31.636963 containerd[1675]: time="2025-11-04T04:51:31.636220542Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 4 04:51:31.636963 containerd[1675]: time="2025-11-04T04:51:31.636226904Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 4 04:51:31.636963 
containerd[1675]: time="2025-11-04T04:51:31.636233895Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 4 04:51:31.636963 containerd[1675]: time="2025-11-04T04:51:31.636252762Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 04:51:31.636963 containerd[1675]: time="2025-11-04T04:51:31.636266188Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 04:51:31.636963 containerd[1675]: time="2025-11-04T04:51:31.636271894Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 04:51:31.637196 containerd[1675]: time="2025-11-04T04:51:31.636277327Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 04:51:31.637196 containerd[1675]: time="2025-11-04T04:51:31.636281900Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 4 04:51:31.637196 containerd[1675]: time="2025-11-04T04:51:31.636294815Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 4 04:51:31.637196 containerd[1675]: time="2025-11-04T04:51:31.636303054Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 4 04:51:31.637196 containerd[1675]: time="2025-11-04T04:51:31.636310727Z" level=info msg="runtime interface created" Nov 4 04:51:31.637196 containerd[1675]: time="2025-11-04T04:51:31.636314252Z" level=info msg="created NRI interface" Nov 4 04:51:31.637196 containerd[1675]: time="2025-11-04T04:51:31.636318854Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 4 04:51:31.637196 containerd[1675]: time="2025-11-04T04:51:31.636328494Z" level=info msg="Connect containerd service" Nov 4 04:51:31.637196 containerd[1675]: time="2025-11-04T04:51:31.636339803Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 4 04:51:31.637196 containerd[1675]: time="2025-11-04T04:51:31.636779970Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 04:51:31.742320 sshd_keygen[1684]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 4 04:51:31.769412 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 4 04:51:31.773993 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 4 04:51:31.787068 systemd[1]: issuegen.service: Deactivated successfully. Nov 4 04:51:31.787222 containerd[1675]: time="2025-11-04T04:51:31.787202334Z" level=info msg="Start subscribing containerd event" Nov 4 04:51:31.787280 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Nov 4 04:51:31.787451 containerd[1675]: time="2025-11-04T04:51:31.787357764Z" level=info msg="Start recovering state" Nov 4 04:51:31.788273 containerd[1675]: time="2025-11-04T04:51:31.788096924Z" level=info msg="Start event monitor" Nov 4 04:51:31.788273 containerd[1675]: time="2025-11-04T04:51:31.788111830Z" level=info msg="Start cni network conf syncer for default" Nov 4 04:51:31.788273 containerd[1675]: time="2025-11-04T04:51:31.788116531Z" level=info msg="Start streaming server" Nov 4 04:51:31.788273 containerd[1675]: time="2025-11-04T04:51:31.788124303Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 4 04:51:31.788273 containerd[1675]: time="2025-11-04T04:51:31.788128456Z" level=info msg="runtime interface starting up..." Nov 4 04:51:31.788273 containerd[1675]: time="2025-11-04T04:51:31.788131861Z" level=info msg="starting plugins..." Nov 4 04:51:31.788273 containerd[1675]: time="2025-11-04T04:51:31.788171185Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 4 04:51:31.788718 containerd[1675]: time="2025-11-04T04:51:31.788501952Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 4 04:51:31.788786 containerd[1675]: time="2025-11-04T04:51:31.788776875Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 4 04:51:31.789318 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 4 04:51:31.789548 containerd[1675]: time="2025-11-04T04:51:31.789493330Z" level=info msg="containerd successfully booted in 0.219594s" Nov 4 04:51:31.789543 systemd[1]: Started containerd.service - containerd container runtime. Nov 4 04:51:31.792976 tar[1661]: linux-amd64/README.md Nov 4 04:51:31.800626 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 4 04:51:31.803739 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 4 04:51:31.804878 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 4 04:51:31.807345 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 4 04:51:31.807575 systemd[1]: Reached target getty.target - Login Prompts. Nov 4 04:51:32.617250 systemd-networkd[1561]: ens192: Gained IPv6LL Nov 4 04:51:32.618664 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 4 04:51:32.619160 systemd[1]: Reached target network-online.target - Network is Online. Nov 4 04:51:32.620619 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Nov 4 04:51:32.624270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:51:32.631816 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 4 04:51:32.657807 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 4 04:51:32.664515 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 4 04:51:32.664894 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Nov 4 04:51:32.665526 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 4 04:51:34.019603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:51:34.020092 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 4 04:51:34.021280 systemd[1]: Startup finished in 2.663s (kernel) + 8.092s (initrd) + 5.174s (userspace) = 15.931s. 
Nov 4 04:51:34.022885 (kubelet)[1846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 04:51:34.399290 login[1813]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 4 04:51:34.400932 login[1814]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 4 04:51:34.405406 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 4 04:51:34.406170 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 4 04:51:34.413981 systemd-logind[1653]: New session 1 of user core. Nov 4 04:51:34.418247 systemd-logind[1653]: New session 2 of user core. Nov 4 04:51:34.427239 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 4 04:51:34.433348 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 4 04:51:34.441767 (systemd)[1858]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 4 04:51:34.443311 systemd-logind[1653]: New session c1 of user core. Nov 4 04:51:34.561927 systemd[1858]: Queued start job for default target default.target. Nov 4 04:51:34.568103 systemd[1858]: Created slice app.slice - User Application Slice. Nov 4 04:51:34.568123 systemd[1858]: Reached target paths.target - Paths. Nov 4 04:51:34.568160 systemd[1858]: Reached target timers.target - Timers. Nov 4 04:51:34.568848 systemd[1858]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 4 04:51:34.579542 systemd[1858]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 4 04:51:34.579616 systemd[1858]: Reached target sockets.target - Sockets. Nov 4 04:51:34.579650 systemd[1858]: Reached target basic.target - Basic System. Nov 4 04:51:34.579672 systemd[1858]: Reached target default.target - Main User Target. Nov 4 04:51:34.579688 systemd[1858]: Startup finished in 131ms. Nov 4 04:51:34.579727 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 4 04:51:34.588294 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 4 04:51:34.589007 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 4 04:51:34.650013 kubelet[1846]: E1104 04:51:34.649948 1846 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 04:51:34.651464 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 04:51:34.651988 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 04:51:34.653160 systemd[1]: kubelet.service: Consumed 619ms CPU time, 258.1M memory peak. Nov 4 04:51:44.881428 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 4 04:51:44.884267 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:51:45.173413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 4 04:51:45.185402 (kubelet)[1896]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 04:51:45.214073 kubelet[1896]: E1104 04:51:45.214040 1896 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 04:51:45.216490 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 04:51:45.216576 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 04:51:45.216858 systemd[1]: kubelet.service: Consumed 104ms CPU time, 107.8M memory peak. Nov 4 04:51:55.381447 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 4 04:51:55.383085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:51:55.713099 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:51:55.719457 (kubelet)[1911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 04:51:55.757518 kubelet[1911]: E1104 04:51:55.757484 1911 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 04:51:55.758885 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 04:51:55.759019 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 04:51:55.759363 systemd[1]: kubelet.service: Consumed 109ms CPU time, 110.4M memory peak. Nov 4 04:52:01.484999 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 4 04:52:01.486360 systemd[1]: Started sshd@0-139.178.70.105:22-147.75.109.163:44338.service - OpenSSH per-connection server daemon (147.75.109.163:44338). Nov 4 04:52:01.538530 sshd[1919]: Accepted publickey for core from 147.75.109.163 port 44338 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:52:01.539281 sshd-session[1919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:52:01.541916 systemd-logind[1653]: New session 3 of user core. Nov 4 04:52:01.553352 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 4 04:52:01.563165 systemd[1]: Started sshd@1-139.178.70.105:22-147.75.109.163:44346.service - OpenSSH per-connection server daemon (147.75.109.163:44346). Nov 4 04:52:01.605030 sshd[1925]: Accepted publickey for core from 147.75.109.163 port 44346 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:52:01.605795 sshd-session[1925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:52:01.608586 systemd-logind[1653]: New session 4 of user core. Nov 4 04:52:01.615297 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 4 04:52:01.622518 sshd[1928]: Connection closed by 147.75.109.163 port 44346 Nov 4 04:52:01.622478 sshd-session[1925]: pam_unix(sshd:session): session closed for user core Nov 4 04:52:01.635347 systemd[1]: sshd@1-139.178.70.105:22-147.75.109.163:44346.service: Deactivated successfully. 
Nov 4 04:52:01.636307 systemd[1]: session-4.scope: Deactivated successfully. Nov 4 04:52:01.637076 systemd-logind[1653]: Session 4 logged out. Waiting for processes to exit. Nov 4 04:52:01.638360 systemd[1]: Started sshd@2-139.178.70.105:22-147.75.109.163:44354.service - OpenSSH per-connection server daemon (147.75.109.163:44354). Nov 4 04:52:01.638995 systemd-logind[1653]: Removed session 4. Nov 4 04:52:01.680818 sshd[1934]: Accepted publickey for core from 147.75.109.163 port 44354 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:52:01.681527 sshd-session[1934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:52:01.684188 systemd-logind[1653]: New session 5 of user core. Nov 4 04:52:01.691217 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 4 04:52:01.695863 sshd[1937]: Connection closed by 147.75.109.163 port 44354 Nov 4 04:52:01.696208 sshd-session[1934]: pam_unix(sshd:session): session closed for user core Nov 4 04:52:01.705346 systemd[1]: sshd@2-139.178.70.105:22-147.75.109.163:44354.service: Deactivated successfully. Nov 4 04:52:01.706217 systemd[1]: session-5.scope: Deactivated successfully. Nov 4 04:52:01.706657 systemd-logind[1653]: Session 5 logged out. Waiting for processes to exit. Nov 4 04:52:01.709270 systemd[1]: Started sshd@3-139.178.70.105:22-147.75.109.163:44364.service - OpenSSH per-connection server daemon (147.75.109.163:44364). Nov 4 04:52:01.710184 systemd-logind[1653]: Removed session 5. Nov 4 04:52:01.766417 sshd[1943]: Accepted publickey for core from 147.75.109.163 port 44364 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:52:01.767151 sshd-session[1943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:52:01.770415 systemd-logind[1653]: New session 6 of user core. Nov 4 04:52:01.779229 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 4 04:52:01.786347 sshd[1946]: Connection closed by 147.75.109.163 port 44364 Nov 4 04:52:01.786120 sshd-session[1943]: pam_unix(sshd:session): session closed for user core Nov 4 04:52:01.796795 systemd[1]: sshd@3-139.178.70.105:22-147.75.109.163:44364.service: Deactivated successfully. Nov 4 04:52:01.798105 systemd[1]: session-6.scope: Deactivated successfully. Nov 4 04:52:01.798837 systemd-logind[1653]: Session 6 logged out. Waiting for processes to exit. Nov 4 04:52:01.800747 systemd[1]: Started sshd@4-139.178.70.105:22-147.75.109.163:44380.service - OpenSSH per-connection server daemon (147.75.109.163:44380). Nov 4 04:52:01.801506 systemd-logind[1653]: Removed session 6. Nov 4 04:52:01.844161 sshd[1952]: Accepted publickey for core from 147.75.109.163 port 44380 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:52:01.844924 sshd-session[1952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:52:01.847725 systemd-logind[1653]: New session 7 of user core. Nov 4 04:52:01.855280 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 4 04:52:01.872238 sudo[1956]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 4 04:52:01.872610 sudo[1956]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 04:52:01.881299 sudo[1956]: pam_unix(sudo:session): session closed for user root Nov 4 04:52:01.882089 sshd[1955]: Connection closed by 147.75.109.163 port 44380 Nov 4 04:52:01.882442 sshd-session[1952]: pam_unix(sshd:session): session closed for user core Nov 4 04:52:01.892781 systemd[1]: sshd@4-139.178.70.105:22-147.75.109.163:44380.service: Deactivated successfully. Nov 4 04:52:01.893782 systemd[1]: session-7.scope: Deactivated successfully. Nov 4 04:52:01.895222 systemd-logind[1653]: Session 7 logged out. Waiting for processes to exit. Nov 4 04:52:01.895916 systemd[1]: Started sshd@5-139.178.70.105:22-147.75.109.163:44394.service - OpenSSH per-connection server daemon (147.75.109.163:44394). Nov 4 04:52:01.896757 systemd-logind[1653]: Removed session 7. Nov 4 04:52:01.935024 sshd[1962]: Accepted publickey for core from 147.75.109.163 port 44394 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:52:01.935772 sshd-session[1962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:52:01.939236 systemd-logind[1653]: New session 8 of user core. Nov 4 04:52:01.945221 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 4 04:52:01.952307 sudo[1967]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 4 04:52:01.952460 sudo[1967]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 04:52:01.954671 sudo[1967]: pam_unix(sudo:session): session closed for user root Nov 4 04:52:01.958037 sudo[1966]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 4 04:52:01.958325 sudo[1966]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 04:52:01.963972 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 04:52:01.984978 augenrules[1989]: No rules Nov 4 04:52:01.985764 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 04:52:01.985905 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 04:52:01.986719 sudo[1966]: pam_unix(sudo:session): session closed for user root Nov 4 04:52:01.988516 sshd[1965]: Connection closed by 147.75.109.163 port 44394 Nov 4 04:52:01.989527 sshd-session[1962]: pam_unix(sshd:session): session closed for user core Nov 4 04:52:01.997673 systemd[1]: sshd@5-139.178.70.105:22-147.75.109.163:44394.service: Deactivated successfully. Nov 4 04:52:01.998674 systemd[1]: session-8.scope: Deactivated successfully. Nov 4 04:52:01.999155 systemd-logind[1653]: Session 8 logged out. Waiting for processes to exit. Nov 4 04:52:02.000360 systemd[1]: Started sshd@6-139.178.70.105:22-147.75.109.163:44402.service - OpenSSH per-connection server daemon (147.75.109.163:44402). Nov 4 04:52:02.001025 systemd-logind[1653]: Removed session 8. Nov 4 04:52:02.035212 sshd[1998]: Accepted publickey for core from 147.75.109.163 port 44402 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:52:02.035529 sshd-session[1998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:52:02.038231 systemd-logind[1653]: New session 9 of user core. Nov 4 04:52:02.048263 systemd[1]: Started session-9.scope - Session 9 of User core. 
Nov 4 04:52:02.055485 sudo[2002]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 4 04:52:02.055631 sudo[2002]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 04:52:02.382103 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 4 04:52:02.391440 (dockerd)[2019]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 4 04:52:02.617712 dockerd[2019]: time="2025-11-04T04:52:02.617678092Z" level=info msg="Starting up" Nov 4 04:52:02.618336 dockerd[2019]: time="2025-11-04T04:52:02.618320806Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 4 04:52:02.626186 dockerd[2019]: time="2025-11-04T04:52:02.626157925Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 4 04:52:02.634833 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2949379895-merged.mount: Deactivated successfully. Nov 4 04:52:02.655058 dockerd[2019]: time="2025-11-04T04:52:02.654984725Z" level=info msg="Loading containers: start." Nov 4 04:52:02.662183 kernel: Initializing XFRM netlink socket Nov 4 04:52:02.809723 systemd-networkd[1561]: docker0: Link UP Nov 4 04:52:02.810997 dockerd[2019]: time="2025-11-04T04:52:02.810976792Z" level=info msg="Loading containers: done." Nov 4 04:52:02.820022 dockerd[2019]: time="2025-11-04T04:52:02.819998739Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 4 04:52:02.820095 dockerd[2019]: time="2025-11-04T04:52:02.820043583Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 4 04:52:02.820095 dockerd[2019]: time="2025-11-04T04:52:02.820081271Z" level=info msg="Initializing buildkit" Nov 4 04:52:02.829566 dockerd[2019]: time="2025-11-04T04:52:02.829546618Z" level=info msg="Completed buildkit initialization" Nov 4 04:52:02.835031 dockerd[2019]: time="2025-11-04T04:52:02.835014641Z" level=info msg="Daemon has completed initialization" Nov 4 04:52:02.835153 dockerd[2019]: time="2025-11-04T04:52:02.835097091Z" level=info msg="API listen on /run/docker.sock" Nov 4 04:52:02.835218 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 4 04:52:03.417926 containerd[1675]: time="2025-11-04T04:52:03.417872809Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 4 04:52:03.631854 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1338061254-merged.mount: Deactivated successfully. Nov 4 04:52:04.023582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4046419940.mount: Deactivated successfully. 
Nov 4 04:52:04.758497 containerd[1675]: time="2025-11-04T04:52:04.758470823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:04.758964 containerd[1675]: time="2025-11-04T04:52:04.758936659Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=25393733" Nov 4 04:52:04.759428 containerd[1675]: time="2025-11-04T04:52:04.759407059Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:04.762878 containerd[1675]: time="2025-11-04T04:52:04.762852514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:04.763618 containerd[1675]: time="2025-11-04T04:52:04.763531545Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.345629893s" Nov 4 04:52:04.763618 containerd[1675]: time="2025-11-04T04:52:04.763550578Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 4 04:52:04.763879 containerd[1675]: time="2025-11-04T04:52:04.763865005Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 4 04:52:05.881603 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 4 04:52:05.883299 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 4 04:52:06.310189 containerd[1675]: time="2025-11-04T04:52:06.310085342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:06.315000 containerd[1675]: time="2025-11-04T04:52:06.314975043Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21154354" Nov 4 04:52:06.383979 containerd[1675]: time="2025-11-04T04:52:06.383143165Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:06.433700 containerd[1675]: time="2025-11-04T04:52:06.433639138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:06.435389 containerd[1675]: time="2025-11-04T04:52:06.435332942Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.671449039s" Nov 4 04:52:06.435389 containerd[1675]: time="2025-11-04T04:52:06.435362775Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 4 04:52:06.436265 containerd[1675]: time="2025-11-04T04:52:06.436216139Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 4 04:52:06.516933 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:52:06.520166 (kubelet)[2301]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 04:52:06.545492 kubelet[2301]: E1104 04:52:06.545449 2301 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 04:52:06.546797 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 04:52:06.546933 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 04:52:06.547325 systemd[1]: kubelet.service: Consumed 100ms CPU time, 108.2M memory peak. 
Nov 4 04:52:07.845382 containerd[1675]: time="2025-11-04T04:52:07.845342819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:07.849528 containerd[1675]: time="2025-11-04T04:52:07.849402750Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=3234" Nov 4 04:52:07.853725 containerd[1675]: time="2025-11-04T04:52:07.853709900Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:07.855937 containerd[1675]: time="2025-11-04T04:52:07.855922332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:07.856392 containerd[1675]: time="2025-11-04T04:52:07.856379604Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.420139262s" Nov 4 04:52:07.856443 containerd[1675]: time="2025-11-04T04:52:07.856434846Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 4 04:52:07.856713 containerd[1675]: time="2025-11-04T04:52:07.856695970Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 4 04:52:09.168452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1309657990.mount: Deactivated successfully. 
Nov 4 04:52:09.491977 containerd[1675]: time="2025-11-04T04:52:09.491909950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:09.493057 containerd[1675]: time="2025-11-04T04:52:09.493037895Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=14376276" Nov 4 04:52:09.493333 containerd[1675]: time="2025-11-04T04:52:09.493320737Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:09.494535 containerd[1675]: time="2025-11-04T04:52:09.494523021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:09.494842 containerd[1675]: time="2025-11-04T04:52:09.494758684Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.638042831s" Nov 4 04:52:09.495078 containerd[1675]: time="2025-11-04T04:52:09.495069176Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 4 04:52:09.495381 containerd[1675]: time="2025-11-04T04:52:09.495367243Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 4 04:52:11.389076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4116179631.mount: Deactivated successfully. 
Nov 4 04:52:12.020727 containerd[1675]: time="2025-11-04T04:52:12.020688173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:12.031379 containerd[1675]: time="2025-11-04T04:52:12.031130415Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22124610" Nov 4 04:52:12.044929 containerd[1675]: time="2025-11-04T04:52:12.044443894Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:12.067003 containerd[1675]: time="2025-11-04T04:52:12.066936632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:12.067401 containerd[1675]: time="2025-11-04T04:52:12.067286870Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.571813951s" Nov 4 04:52:12.067401 containerd[1675]: time="2025-11-04T04:52:12.067304915Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 4 04:52:12.067594 containerd[1675]: time="2025-11-04T04:52:12.067576537Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 4 04:52:12.877078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2175108865.mount: Deactivated successfully. 
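The entries above show containerd's CRI image service fetching the control-plane images (kube-scheduler, kube-proxy, coredns, and next the pause sandbox image). Assuming crictl is installed on the node and containerd is listening on its default socket (both assumptions, not shown in the log), the cached images can be inspected directly; a minimal sketch:

# point crictl at containerd's CRI socket (default containerd socket path assumed)
export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
# list what has been pulled so far
crictl images | grep -E 'kube-scheduler|kube-proxy|coredns|pause'
# repo tags, digest and size for one image, matching the "Pulled image" entries above
crictl inspecti registry.k8s.io/kube-scheduler:v1.34.1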
Nov 4 04:52:12.896321 containerd[1675]: time="2025-11-04T04:52:12.896292536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:12.898530 containerd[1675]: time="2025-11-04T04:52:12.898509780Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Nov 4 04:52:12.901378 containerd[1675]: time="2025-11-04T04:52:12.901357951Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:12.903654 containerd[1675]: time="2025-11-04T04:52:12.903630887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:12.904127 containerd[1675]: time="2025-11-04T04:52:12.903884616Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 836.291378ms" Nov 4 04:52:12.904127 containerd[1675]: time="2025-11-04T04:52:12.903903871Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 4 04:52:12.904207 containerd[1675]: time="2025-11-04T04:52:12.904194954Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 4 04:52:16.631408 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 4 04:52:16.633441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:52:16.836515 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:52:16.844315 (kubelet)[2424]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 04:52:17.101545 update_engine[1654]: I20251104 04:52:17.100948 1654 update_attempter.cc:509] Updating boot flags... Nov 4 04:52:17.154549 kubelet[2424]: E1104 04:52:17.154488 2424 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 04:52:17.157332 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 04:52:17.157414 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 04:52:17.157622 systemd[1]: kubelet.service: Consumed 121ms CPU time, 103.6M memory peak. 
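The kubelet crash loop recorded here ("Scheduled restart job, restart counter is at 4", then exit status 1) is the unit starting before /var/lib/kubelet/config.yaml exists; on a kubeadm-provisioned node that file is only written during kubeadm init/join, so the failures are expected to stop once initialization writes the config. A minimal sketch of how the state could be checked while this is happening, assuming a standard kubeadm layout:

# the file the unit is complaining about is created by 'kubeadm init' (or 'kubeadm join')
ls -l /var/lib/kubelet/config.yaml 2>/dev/null || echo "kubelet config not written yet"
# follow the restart attempts and the exact error
systemctl status kubelet --no-pager
journalctl -u kubelet -n 20 --no-pager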
Nov 4 04:52:19.693706 containerd[1675]: time="2025-11-04T04:52:19.693659393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:19.947730 containerd[1675]: time="2025-11-04T04:52:19.947622285Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73504897" Nov 4 04:52:20.071667 containerd[1675]: time="2025-11-04T04:52:20.071626604Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:20.081273 containerd[1675]: time="2025-11-04T04:52:20.081247287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:20.081854 containerd[1675]: time="2025-11-04T04:52:20.081708395Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 7.177496109s" Nov 4 04:52:20.081854 containerd[1675]: time="2025-11-04T04:52:20.081732382Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 4 04:52:22.504596 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:52:22.504998 systemd[1]: kubelet.service: Consumed 121ms CPU time, 103.6M memory peak. Nov 4 04:52:22.506529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:52:22.521748 systemd[1]: Reload requested from client PID 2480 ('systemctl') (unit session-9.scope)... Nov 4 04:52:22.521827 systemd[1]: Reloading... Nov 4 04:52:22.591161 zram_generator::config[2527]: No configuration found. Nov 4 04:52:22.656865 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 4 04:52:22.727018 systemd[1]: Reloading finished in 204 ms. Nov 4 04:52:22.798710 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 4 04:52:22.798775 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 4 04:52:22.798967 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:52:22.800243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:52:23.157506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:52:23.160708 (kubelet)[2591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 04:52:23.220129 kubelet[2591]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 04:52:23.220129 kubelet[2591]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
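The "Ignoring unknown escape sequences" warning emitted during the daemon reload points at line 11 of /etc/systemd/system/coreos-metadata.service: the quoted command embeds grep -Po patterns whose backslash sequences (\K, \d) systemd tries to interpret as C-style escapes and then discards. One way to sidestep the escaping rules, sketched purely as a hypothetical rearrangement (the unit's real layout and the echo's redirect target are not visible in the log), is to move the pipeline into a helper script so no backslashes appear in the unit file at all:

#!/usr/bin/env bash
# hypothetical /opt/bin/private-ipv4.sh, invoked from the unit instead of an inline quoted command
set -euo pipefail
ip4=$(ip addr show ens192 | grep "inet 10." | grep -Po 'inet \K[\d.]+')
echo "COREOS_CUSTOM_PRIVATE_IPV4=${ip4}"   # redirect wherever the original unit sends it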
Nov 4 04:52:23.220379 kubelet[2591]: I1104 04:52:23.220147 2591 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 04:52:23.979894 kubelet[2591]: I1104 04:52:23.979868 2591 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 4 04:52:23.979894 kubelet[2591]: I1104 04:52:23.979887 2591 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 04:52:23.979998 kubelet[2591]: I1104 04:52:23.979906 2591 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 4 04:52:23.979998 kubelet[2591]: I1104 04:52:23.979910 2591 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 04:52:23.980059 kubelet[2591]: I1104 04:52:23.980051 2591 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 04:52:23.991064 kubelet[2591]: I1104 04:52:23.991048 2591 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 04:52:23.994995 kubelet[2591]: E1104 04:52:23.994965 2591 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 4 04:52:24.006449 kubelet[2591]: I1104 04:52:24.006441 2591 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 04:52:24.016159 kubelet[2591]: I1104 04:52:24.016017 2591 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 4 04:52:24.016688 kubelet[2591]: I1104 04:52:24.016661 2591 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 04:52:24.017758 kubelet[2591]: I1104 04:52:24.016685 2591 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 04:52:24.017843 kubelet[2591]: I1104 04:52:24.017759 2591 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 04:52:24.017843 kubelet[2591]: I1104 04:52:24.017766 2591 container_manager_linux.go:306] "Creating device plugin manager" Nov 4 04:52:24.017843 kubelet[2591]: I1104 04:52:24.017820 2591 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 4 04:52:24.018570 kubelet[2591]: I1104 04:52:24.018559 2591 state_mem.go:36] "Initialized new in-memory state store" Nov 4 04:52:24.018750 kubelet[2591]: I1104 04:52:24.018682 2591 kubelet.go:475] "Attempting to sync node with API server" Nov 4 04:52:24.018750 kubelet[2591]: I1104 04:52:24.018691 2591 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 04:52:24.018750 kubelet[2591]: I1104 04:52:24.018707 2591 kubelet.go:387] "Adding apiserver pod source" Nov 4 04:52:24.018750 kubelet[2591]: I1104 04:52:24.018718 2591 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 04:52:24.019074 kubelet[2591]: E1104 04:52:24.019061 2591 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 04:52:24.020799 kubelet[2591]: E1104 04:52:24.020784 2591 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
139.178.70.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 04:52:24.021549 kubelet[2591]: I1104 04:52:24.021536 2591 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 4 04:52:24.023900 kubelet[2591]: I1104 04:52:24.023886 2591 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 04:52:24.023942 kubelet[2591]: I1104 04:52:24.023906 2591 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 4 04:52:24.026398 kubelet[2591]: W1104 04:52:24.026384 2591 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 4 04:52:24.029856 kubelet[2591]: I1104 04:52:24.029679 2591 server.go:1262] "Started kubelet" Nov 4 04:52:24.031466 kubelet[2591]: I1104 04:52:24.031381 2591 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 04:52:24.035033 kubelet[2591]: E1104 04:52:24.034064 2591 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.105:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.105:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1874b49dcbbbe79e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-04 04:52:24.029661086 +0000 UTC m=+0.867045050,LastTimestamp:2025-11-04 04:52:24.029661086 +0000 UTC m=+0.867045050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 4 04:52:24.035264 kubelet[2591]: I1104 04:52:24.035158 2591 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 04:52:24.036026 kubelet[2591]: I1104 04:52:24.035516 2591 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 4 04:52:24.037308 kubelet[2591]: I1104 04:52:24.037294 2591 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 4 04:52:24.037542 kubelet[2591]: E1104 04:52:24.037410 2591 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 04:52:24.042794 kubelet[2591]: I1104 04:52:24.042785 2591 server.go:310] "Adding debug handlers to kubelet server" Nov 4 04:52:24.044951 kubelet[2591]: I1104 04:52:24.044934 2591 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 4 04:52:24.044993 kubelet[2591]: I1104 04:52:24.044963 2591 reconciler.go:29] "Reconciler: start to sync state" Nov 4 04:52:24.046808 kubelet[2591]: I1104 04:52:24.046740 2591 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 04:52:24.046808 kubelet[2591]: I1104 04:52:24.046770 2591 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 4 04:52:24.046897 kubelet[2591]: I1104 04:52:24.046868 2591 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 04:52:24.049201 kubelet[2591]: I1104 04:52:24.049185 2591 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 04:52:24.052632 kubelet[2591]: E1104 04:52:24.051921 2591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="200ms" Nov 4 04:52:24.052632 kubelet[2591]: E1104 04:52:24.052396 2591 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 04:52:24.054862 kubelet[2591]: I1104 04:52:24.053969 2591 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Nov 4 04:52:24.054862 kubelet[2591]: I1104 04:52:24.053988 2591 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 4 04:52:24.054862 kubelet[2591]: I1104 04:52:24.054001 2591 kubelet.go:2427] "Starting kubelet main sync loop" Nov 4 04:52:24.054862 kubelet[2591]: E1104 04:52:24.054023 2591 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 04:52:24.054862 kubelet[2591]: E1104 04:52:24.054302 2591 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 04:52:24.057032 kubelet[2591]: I1104 04:52:24.056906 2591 factory.go:223] Registration of the systemd container factory successfully Nov 4 04:52:24.057032 kubelet[2591]: I1104 04:52:24.056953 2591 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 04:52:24.057896 kubelet[2591]: I1104 04:52:24.057834 2591 factory.go:223] Registration of the containerd container factory successfully Nov 4 04:52:24.062604 kubelet[2591]: E1104 04:52:24.062580 2591 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 04:52:24.070997 kubelet[2591]: I1104 04:52:24.070976 2591 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 04:52:24.070997 kubelet[2591]: I1104 04:52:24.070987 2591 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 04:52:24.070997 kubelet[2591]: I1104 04:52:24.070997 2591 state_mem.go:36] "Initialized new in-memory state store" Nov 4 04:52:24.071666 kubelet[2591]: I1104 04:52:24.071654 2591 policy_none.go:49] "None policy: Start" Nov 4 04:52:24.071666 kubelet[2591]: I1104 04:52:24.071665 2591 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 4 04:52:24.071713 kubelet[2591]: I1104 04:52:24.071670 2591 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 4 04:52:24.072043 kubelet[2591]: I1104 04:52:24.072032 2591 policy_none.go:47] "Start" Nov 4 04:52:24.076639 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 4 04:52:24.087787 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 4 04:52:24.090994 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 4 04:52:24.098638 kubelet[2591]: E1104 04:52:24.098622 2591 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 04:52:24.099273 kubelet[2591]: I1104 04:52:24.099261 2591 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 04:52:24.099311 kubelet[2591]: I1104 04:52:24.099271 2591 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 04:52:24.101199 kubelet[2591]: E1104 04:52:24.101186 2591 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 4 04:52:24.101247 kubelet[2591]: E1104 04:52:24.101213 2591 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 4 04:52:24.103723 kubelet[2591]: I1104 04:52:24.103712 2591 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 04:52:24.164203 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Nov 4 04:52:24.174609 kubelet[2591]: E1104 04:52:24.174597 2591 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 04:52:24.176600 systemd[1]: Created slice kubepods-burstable-podfac59e02038f28de050dcbeaa080a2cb.slice - libcontainer container kubepods-burstable-podfac59e02038f28de050dcbeaa080a2cb.slice. Nov 4 04:52:24.177798 kubelet[2591]: E1104 04:52:24.177785 2591 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 04:52:24.188945 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Nov 4 04:52:24.190265 kubelet[2591]: E1104 04:52:24.190253 2591 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 04:52:24.201059 kubelet[2591]: I1104 04:52:24.201048 2591 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 04:52:24.203814 kubelet[2591]: E1104 04:52:24.203800 2591 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Nov 4 04:52:24.246958 kubelet[2591]: I1104 04:52:24.246180 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fac59e02038f28de050dcbeaa080a2cb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fac59e02038f28de050dcbeaa080a2cb\") " pod="kube-system/kube-apiserver-localhost" Nov 4 04:52:24.246958 kubelet[2591]: I1104 04:52:24.246202 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fac59e02038f28de050dcbeaa080a2cb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fac59e02038f28de050dcbeaa080a2cb\") " pod="kube-system/kube-apiserver-localhost" Nov 4 04:52:24.246958 kubelet[2591]: I1104 04:52:24.246215 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 04:52:24.246958 kubelet[2591]: I1104 04:52:24.246225 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " 
pod="kube-system/kube-controller-manager-localhost" Nov 4 04:52:24.246958 kubelet[2591]: I1104 04:52:24.246235 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 04:52:24.247254 kubelet[2591]: I1104 04:52:24.246243 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 04:52:24.247254 kubelet[2591]: I1104 04:52:24.246252 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 04:52:24.247254 kubelet[2591]: I1104 04:52:24.246266 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fac59e02038f28de050dcbeaa080a2cb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fac59e02038f28de050dcbeaa080a2cb\") " pod="kube-system/kube-apiserver-localhost" Nov 4 04:52:24.247254 kubelet[2591]: I1104 04:52:24.246274 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 4 04:52:24.252659 kubelet[2591]: E1104 04:52:24.252643 2591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="400ms" Nov 4 04:52:24.405101 kubelet[2591]: I1104 04:52:24.405071 2591 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 04:52:24.405354 kubelet[2591]: E1104 04:52:24.405337 2591 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Nov 4 04:52:24.477421 containerd[1675]: time="2025-11-04T04:52:24.477212432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Nov 4 04:52:24.479177 containerd[1675]: time="2025-11-04T04:52:24.479032024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fac59e02038f28de050dcbeaa080a2cb,Namespace:kube-system,Attempt:0,}" Nov 4 04:52:24.492361 containerd[1675]: time="2025-11-04T04:52:24.492187495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Nov 4 04:52:24.653350 kubelet[2591]: E1104 04:52:24.653320 2591 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="800ms" Nov 4 04:52:24.806968 kubelet[2591]: I1104 04:52:24.806953 2591 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 04:52:24.807268 kubelet[2591]: E1104 04:52:24.807252 2591 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Nov 4 04:52:24.911178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3825686672.mount: Deactivated successfully. Nov 4 04:52:24.914154 containerd[1675]: time="2025-11-04T04:52:24.913926376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 04:52:24.914351 containerd[1675]: time="2025-11-04T04:52:24.914333926Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 4 04:52:24.915754 containerd[1675]: time="2025-11-04T04:52:24.915737619Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 04:52:24.916416 containerd[1675]: time="2025-11-04T04:52:24.916401117Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 4 04:52:24.916986 containerd[1675]: time="2025-11-04T04:52:24.916968289Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 04:52:24.921586 containerd[1675]: time="2025-11-04T04:52:24.921569360Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 04:52:24.922466 containerd[1675]: time="2025-11-04T04:52:24.922449579Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 4 04:52:24.922842 containerd[1675]: time="2025-11-04T04:52:24.922825390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 04:52:24.923951 containerd[1675]: time="2025-11-04T04:52:24.923934608Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 442.468102ms" Nov 4 04:52:24.924779 containerd[1675]: time="2025-11-04T04:52:24.924094088Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 445.56426ms" Nov 4 04:52:24.925824 containerd[1675]: time="2025-11-04T04:52:24.925806543Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 432.86136ms" Nov 4 04:52:25.015180 kubelet[2591]: E1104 04:52:25.013234 2591 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 04:52:25.015297 containerd[1675]: time="2025-11-04T04:52:25.013523026Z" level=info msg="connecting to shim fa20feadeb34126d8093a51e40feeba922924cfc33339421249bce60a745b622" address="unix:///run/containerd/s/d897287de10026552c06def3234fc8b0c8dd7dbbb646925fdd32b5355e69f518" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:52:25.023284 containerd[1675]: time="2025-11-04T04:52:25.023255004Z" level=info msg="connecting to shim 9e6f21bd795cce6bbceb0c89643c7e518d1d4b942a2f3720e4f6e658e8debea7" address="unix:///run/containerd/s/d5f9581c227067eb7a825f95bc9ad9936e86955a8392ff1c7138f12961ce96cc" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:52:25.024022 containerd[1675]: time="2025-11-04T04:52:25.023254996Z" level=info msg="connecting to shim 9fd0959d0baa52603083c884bef3743c5a933cb028d46853392cd6f8c26841d8" address="unix:///run/containerd/s/3c15c7fc7933014d18ae5a137ef83e3a305a473cd8dfc7be5adfbd4c21645fb9" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:52:25.094253 systemd[1]: Started cri-containerd-9e6f21bd795cce6bbceb0c89643c7e518d1d4b942a2f3720e4f6e658e8debea7.scope - libcontainer container 9e6f21bd795cce6bbceb0c89643c7e518d1d4b942a2f3720e4f6e658e8debea7. Nov 4 04:52:25.095974 systemd[1]: Started cri-containerd-9fd0959d0baa52603083c884bef3743c5a933cb028d46853392cd6f8c26841d8.scope - libcontainer container 9fd0959d0baa52603083c884bef3743c5a933cb028d46853392cd6f8c26841d8. Nov 4 04:52:25.097762 systemd[1]: Started cri-containerd-fa20feadeb34126d8093a51e40feeba922924cfc33339421249bce60a745b622.scope - libcontainer container fa20feadeb34126d8093a51e40feeba922924cfc33339421249bce60a745b622. 
Nov 4 04:52:25.142272 kubelet[2591]: E1104 04:52:25.142246 2591 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 04:52:25.142620 containerd[1675]: time="2025-11-04T04:52:25.142598911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fac59e02038f28de050dcbeaa080a2cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fd0959d0baa52603083c884bef3743c5a933cb028d46853392cd6f8c26841d8\"" Nov 4 04:52:25.146337 containerd[1675]: time="2025-11-04T04:52:25.146290217Z" level=info msg="CreateContainer within sandbox \"9fd0959d0baa52603083c884bef3743c5a933cb028d46853392cd6f8c26841d8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 4 04:52:25.148051 containerd[1675]: time="2025-11-04T04:52:25.148034976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa20feadeb34126d8093a51e40feeba922924cfc33339421249bce60a745b622\"" Nov 4 04:52:25.150305 containerd[1675]: time="2025-11-04T04:52:25.150289611Z" level=info msg="CreateContainer within sandbox \"fa20feadeb34126d8093a51e40feeba922924cfc33339421249bce60a745b622\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 4 04:52:25.155880 containerd[1675]: time="2025-11-04T04:52:25.155829528Z" level=info msg="Container 9359045f7e08d842831e6563d4192794b1ef3b522abdcf38c5007acb10be64a1: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:52:25.157797 containerd[1675]: time="2025-11-04T04:52:25.157766970Z" level=info msg="Container af15cbeb6664f7b05f994b2bee5d29eaef82487446e21cba64507326687197bf: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:52:25.166496 containerd[1675]: time="2025-11-04T04:52:25.166387089Z" level=info msg="CreateContainer within sandbox \"9fd0959d0baa52603083c884bef3743c5a933cb028d46853392cd6f8c26841d8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"af15cbeb6664f7b05f994b2bee5d29eaef82487446e21cba64507326687197bf\"" Nov 4 04:52:25.167115 containerd[1675]: time="2025-11-04T04:52:25.167104533Z" level=info msg="StartContainer for \"af15cbeb6664f7b05f994b2bee5d29eaef82487446e21cba64507326687197bf\"" Nov 4 04:52:25.168789 containerd[1675]: time="2025-11-04T04:52:25.168775967Z" level=info msg="connecting to shim af15cbeb6664f7b05f994b2bee5d29eaef82487446e21cba64507326687197bf" address="unix:///run/containerd/s/3c15c7fc7933014d18ae5a137ef83e3a305a473cd8dfc7be5adfbd4c21645fb9" protocol=ttrpc version=3 Nov 4 04:52:25.169253 containerd[1675]: time="2025-11-04T04:52:25.169233884Z" level=info msg="CreateContainer within sandbox \"fa20feadeb34126d8093a51e40feeba922924cfc33339421249bce60a745b622\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9359045f7e08d842831e6563d4192794b1ef3b522abdcf38c5007acb10be64a1\"" Nov 4 04:52:25.169937 containerd[1675]: time="2025-11-04T04:52:25.169806653Z" level=info msg="StartContainer for \"9359045f7e08d842831e6563d4192794b1ef3b522abdcf38c5007acb10be64a1\"" Nov 4 04:52:25.172221 containerd[1675]: time="2025-11-04T04:52:25.172206932Z" level=info msg="connecting to shim 9359045f7e08d842831e6563d4192794b1ef3b522abdcf38c5007acb10be64a1" 
address="unix:///run/containerd/s/d897287de10026552c06def3234fc8b0c8dd7dbbb646925fdd32b5355e69f518" protocol=ttrpc version=3 Nov 4 04:52:25.172566 containerd[1675]: time="2025-11-04T04:52:25.172549799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e6f21bd795cce6bbceb0c89643c7e518d1d4b942a2f3720e4f6e658e8debea7\"" Nov 4 04:52:25.177238 containerd[1675]: time="2025-11-04T04:52:25.177221660Z" level=info msg="CreateContainer within sandbox \"9e6f21bd795cce6bbceb0c89643c7e518d1d4b942a2f3720e4f6e658e8debea7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 4 04:52:25.186534 containerd[1675]: time="2025-11-04T04:52:25.186515525Z" level=info msg="Container 4b41b10bcdfeb40cc9f91f60256482722395a72b965507f37cbdbc0f5b973135: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:52:25.189029 containerd[1675]: time="2025-11-04T04:52:25.189011408Z" level=info msg="CreateContainer within sandbox \"9e6f21bd795cce6bbceb0c89643c7e518d1d4b942a2f3720e4f6e658e8debea7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4b41b10bcdfeb40cc9f91f60256482722395a72b965507f37cbdbc0f5b973135\"" Nov 4 04:52:25.189277 containerd[1675]: time="2025-11-04T04:52:25.189265395Z" level=info msg="StartContainer for \"4b41b10bcdfeb40cc9f91f60256482722395a72b965507f37cbdbc0f5b973135\"" Nov 4 04:52:25.189815 containerd[1675]: time="2025-11-04T04:52:25.189800927Z" level=info msg="connecting to shim 4b41b10bcdfeb40cc9f91f60256482722395a72b965507f37cbdbc0f5b973135" address="unix:///run/containerd/s/d5f9581c227067eb7a825f95bc9ad9936e86955a8392ff1c7138f12961ce96cc" protocol=ttrpc version=3 Nov 4 04:52:25.190259 systemd[1]: Started cri-containerd-9359045f7e08d842831e6563d4192794b1ef3b522abdcf38c5007acb10be64a1.scope - libcontainer container 9359045f7e08d842831e6563d4192794b1ef3b522abdcf38c5007acb10be64a1. Nov 4 04:52:25.191779 systemd[1]: Started cri-containerd-af15cbeb6664f7b05f994b2bee5d29eaef82487446e21cba64507326687197bf.scope - libcontainer container af15cbeb6664f7b05f994b2bee5d29eaef82487446e21cba64507326687197bf. Nov 4 04:52:25.208219 systemd[1]: Started cri-containerd-4b41b10bcdfeb40cc9f91f60256482722395a72b965507f37cbdbc0f5b973135.scope - libcontainer container 4b41b10bcdfeb40cc9f91f60256482722395a72b965507f37cbdbc0f5b973135. 
Nov 4 04:52:25.234660 containerd[1675]: time="2025-11-04T04:52:25.234635929Z" level=info msg="StartContainer for \"af15cbeb6664f7b05f994b2bee5d29eaef82487446e21cba64507326687197bf\" returns successfully" Nov 4 04:52:25.238825 kubelet[2591]: E1104 04:52:25.238798 2591 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 04:52:25.258302 containerd[1675]: time="2025-11-04T04:52:25.258254617Z" level=info msg="StartContainer for \"9359045f7e08d842831e6563d4192794b1ef3b522abdcf38c5007acb10be64a1\" returns successfully" Nov 4 04:52:25.266946 containerd[1675]: time="2025-11-04T04:52:25.266909597Z" level=info msg="StartContainer for \"4b41b10bcdfeb40cc9f91f60256482722395a72b965507f37cbdbc0f5b973135\" returns successfully" Nov 4 04:52:25.393909 kubelet[2591]: E1104 04:52:25.393887 2591 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 04:52:25.455454 kubelet[2591]: E1104 04:52:25.455375 2591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="1.6s" Nov 4 04:52:25.608775 kubelet[2591]: I1104 04:52:25.608751 2591 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 04:52:25.609027 kubelet[2591]: E1104 04:52:25.609014 2591 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Nov 4 04:52:26.068707 kubelet[2591]: E1104 04:52:26.068598 2591 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 04:52:26.070366 kubelet[2591]: E1104 04:52:26.070254 2591 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 04:52:26.071958 kubelet[2591]: E1104 04:52:26.071949 2591 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 04:52:27.057597 kubelet[2591]: E1104 04:52:27.057564 2591 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 4 04:52:27.074582 kubelet[2591]: E1104 04:52:27.074563 2591 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 04:52:27.074767 kubelet[2591]: E1104 04:52:27.074755 2591 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 04:52:27.097920 kubelet[2591]: E1104 04:52:27.097840 2591 event.go:359] "Server rejected event (will not retry!)" 
err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1874b49dcbbbe79e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-04 04:52:24.029661086 +0000 UTC m=+0.867045050,LastTimestamp:2025-11-04 04:52:24.029661086 +0000 UTC m=+0.867045050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 4 04:52:27.210511 kubelet[2591]: I1104 04:52:27.210492 2591 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 04:52:27.221176 kubelet[2591]: I1104 04:52:27.221151 2591 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 4 04:52:27.221176 kubelet[2591]: E1104 04:52:27.221177 2591 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 4 04:52:27.235248 kubelet[2591]: E1104 04:52:27.235227 2591 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 04:52:27.336185 kubelet[2591]: E1104 04:52:27.336076 2591 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 04:52:27.359121 kubelet[2591]: E1104 04:52:27.359096 2591 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 04:52:27.437007 kubelet[2591]: E1104 04:52:27.436979 2591 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 04:52:27.537649 kubelet[2591]: E1104 04:52:27.537627 2591 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 04:52:27.638061 kubelet[2591]: E1104 04:52:27.638034 2591 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 04:52:27.738587 kubelet[2591]: E1104 04:52:27.738556 2591 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 04:52:27.839345 kubelet[2591]: E1104 04:52:27.839310 2591 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 04:52:27.942402 kubelet[2591]: I1104 04:52:27.942173 2591 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 04:52:27.948843 kubelet[2591]: I1104 04:52:27.948819 2591 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 04:52:27.952664 kubelet[2591]: I1104 04:52:27.952510 2591 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 04:52:28.024914 kubelet[2591]: I1104 04:52:28.024512 2591 apiserver.go:52] "Watching apiserver" Nov 4 04:52:28.046817 kubelet[2591]: I1104 04:52:28.046666 2591 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 4 04:52:29.275245 systemd[1]: Reload requested from client PID 2873 ('systemctl') (unit session-9.scope)... Nov 4 04:52:29.275255 systemd[1]: Reloading... Nov 4 04:52:29.332162 zram_generator::config[2917]: No configuration found. 
Nov 4 04:52:29.426065 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 4 04:52:29.508342 systemd[1]: Reloading finished in 232 ms. Nov 4 04:52:29.527949 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:52:29.543819 systemd[1]: kubelet.service: Deactivated successfully. Nov 4 04:52:29.544000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:52:29.544039 systemd[1]: kubelet.service: Consumed 1.025s CPU time, 122.1M memory peak. Nov 4 04:52:29.545325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:52:30.043125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:52:30.049475 (kubelet)[2985]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 04:52:30.097872 kubelet[2985]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 04:52:30.097872 kubelet[2985]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 04:52:30.098070 kubelet[2985]: I1104 04:52:30.097915 2985 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 04:52:30.101767 kubelet[2985]: I1104 04:52:30.101643 2985 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 4 04:52:30.101767 kubelet[2985]: I1104 04:52:30.101659 2985 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 04:52:30.107651 kubelet[2985]: I1104 04:52:30.106797 2985 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 4 04:52:30.107651 kubelet[2985]: I1104 04:52:30.106808 2985 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 04:52:30.107651 kubelet[2985]: I1104 04:52:30.106952 2985 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 04:52:30.107766 kubelet[2985]: I1104 04:52:30.107656 2985 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 4 04:52:30.112009 kubelet[2985]: I1104 04:52:30.111996 2985 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 04:52:30.113959 kubelet[2985]: I1104 04:52:30.113942 2985 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 04:52:30.116394 kubelet[2985]: I1104 04:52:30.116368 2985 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 4 04:52:30.119313 kubelet[2985]: I1104 04:52:30.118812 2985 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 04:52:30.119313 kubelet[2985]: I1104 04:52:30.118846 2985 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 04:52:30.119313 kubelet[2985]: I1104 04:52:30.119012 2985 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 04:52:30.119313 kubelet[2985]: I1104 04:52:30.119019 2985 container_manager_linux.go:306] "Creating device plugin manager" Nov 4 04:52:30.119487 kubelet[2985]: I1104 04:52:30.119041 2985 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 4 04:52:30.120049 kubelet[2985]: I1104 04:52:30.120033 2985 state_mem.go:36] "Initialized new in-memory state store" Nov 4 04:52:30.120328 kubelet[2985]: I1104 04:52:30.120233 2985 kubelet.go:475] "Attempting to sync node with API server" Nov 4 04:52:30.120328 kubelet[2985]: I1104 04:52:30.120246 2985 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 04:52:30.120328 kubelet[2985]: I1104 04:52:30.120264 2985 kubelet.go:387] "Adding apiserver pod source" Nov 4 04:52:30.120328 kubelet[2985]: I1104 04:52:30.120278 2985 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 04:52:30.123671 kubelet[2985]: I1104 04:52:30.123652 2985 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 4 04:52:30.125546 kubelet[2985]: I1104 04:52:30.124529 2985 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 04:52:30.125546 kubelet[2985]: I1104 04:52:30.124550 2985 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 4 04:52:30.126383 kubelet[2985]: I1104 
04:52:30.126373 2985 server.go:1262] "Started kubelet" Nov 4 04:52:30.127492 kubelet[2985]: I1104 04:52:30.127478 2985 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 04:52:30.129185 kubelet[2985]: I1104 04:52:30.129168 2985 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 04:52:30.131161 kubelet[2985]: I1104 04:52:30.130029 2985 server.go:310] "Adding debug handlers to kubelet server" Nov 4 04:52:30.133596 kubelet[2985]: I1104 04:52:30.133572 2985 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 04:52:30.133714 kubelet[2985]: I1104 04:52:30.133704 2985 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 4 04:52:30.133891 kubelet[2985]: I1104 04:52:30.133883 2985 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 04:52:30.134536 kubelet[2985]: I1104 04:52:30.134522 2985 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 4 04:52:30.134738 kubelet[2985]: E1104 04:52:30.134724 2985 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 04:52:30.135102 kubelet[2985]: I1104 04:52:30.134838 2985 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 04:52:30.137271 kubelet[2985]: I1104 04:52:30.137253 2985 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 4 04:52:30.137351 kubelet[2985]: I1104 04:52:30.137324 2985 reconciler.go:29] "Reconciler: start to sync state" Nov 4 04:52:30.139208 kubelet[2985]: I1104 04:52:30.139187 2985 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 4 04:52:30.140336 kubelet[2985]: I1104 04:52:30.140324 2985 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 4 04:52:30.140386 kubelet[2985]: I1104 04:52:30.140339 2985 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 4 04:52:30.140386 kubelet[2985]: I1104 04:52:30.140357 2985 kubelet.go:2427] "Starting kubelet main sync loop" Nov 4 04:52:30.140428 kubelet[2985]: E1104 04:52:30.140389 2985 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 04:52:30.140914 kubelet[2985]: I1104 04:52:30.140906 2985 factory.go:223] Registration of the systemd container factory successfully Nov 4 04:52:30.141011 kubelet[2985]: I1104 04:52:30.141000 2985 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 04:52:30.143155 kubelet[2985]: I1104 04:52:30.142520 2985 factory.go:223] Registration of the containerd container factory successfully Nov 4 04:52:30.185361 kubelet[2985]: I1104 04:52:30.185345 2985 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 04:52:30.185631 kubelet[2985]: I1104 04:52:30.185621 2985 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 04:52:30.185679 kubelet[2985]: I1104 04:52:30.185674 2985 state_mem.go:36] "Initialized new in-memory state store" Nov 4 04:52:30.186272 kubelet[2985]: I1104 04:52:30.186263 2985 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 4 04:52:30.186378 kubelet[2985]: I1104 04:52:30.186362 2985 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 4 04:52:30.186415 kubelet[2985]: I1104 04:52:30.186410 2985 policy_none.go:49] "None policy: Start" Nov 4 04:52:30.186507 kubelet[2985]: I1104 04:52:30.186500 2985 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 4 04:52:30.186543 kubelet[2985]: I1104 04:52:30.186539 2985 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 4 04:52:30.186937 kubelet[2985]: I1104 04:52:30.186929 2985 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 4 04:52:30.187031 kubelet[2985]: I1104 04:52:30.187026 2985 policy_none.go:47] "Start" Nov 4 04:52:30.189462 kubelet[2985]: E1104 04:52:30.189446 2985 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 04:52:30.189753 kubelet[2985]: I1104 04:52:30.189746 2985 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 04:52:30.189815 kubelet[2985]: I1104 04:52:30.189795 2985 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 04:52:30.189953 kubelet[2985]: I1104 04:52:30.189938 2985 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 04:52:30.192289 kubelet[2985]: E1104 04:52:30.191204 2985 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 4 04:52:30.241339 kubelet[2985]: I1104 04:52:30.241309 2985 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 04:52:30.242367 kubelet[2985]: I1104 04:52:30.241691 2985 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 04:52:30.243503 kubelet[2985]: I1104 04:52:30.243490 2985 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 04:52:30.246500 kubelet[2985]: E1104 04:52:30.246481 2985 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 4 04:52:30.247249 kubelet[2985]: E1104 04:52:30.247197 2985 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 4 04:52:30.248103 kubelet[2985]: E1104 04:52:30.247915 2985 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 4 04:52:30.291016 kubelet[2985]: I1104 04:52:30.290942 2985 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 04:52:30.302082 kubelet[2985]: I1104 04:52:30.302017 2985 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 4 04:52:30.303040 kubelet[2985]: I1104 04:52:30.303006 2985 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 4 04:52:30.438658 kubelet[2985]: I1104 04:52:30.438634 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fac59e02038f28de050dcbeaa080a2cb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fac59e02038f28de050dcbeaa080a2cb\") " pod="kube-system/kube-apiserver-localhost" Nov 4 04:52:30.438658 kubelet[2985]: I1104 04:52:30.438657 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fac59e02038f28de050dcbeaa080a2cb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fac59e02038f28de050dcbeaa080a2cb\") " pod="kube-system/kube-apiserver-localhost" Nov 4 04:52:30.438882 kubelet[2985]: I1104 04:52:30.438672 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 04:52:30.438882 kubelet[2985]: I1104 04:52:30.438680 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 04:52:30.438882 kubelet[2985]: I1104 04:52:30.438690 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 04:52:30.438882 kubelet[2985]: I1104 04:52:30.438715 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 4 04:52:30.438882 kubelet[2985]: I1104 04:52:30.438736 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fac59e02038f28de050dcbeaa080a2cb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fac59e02038f28de050dcbeaa080a2cb\") " pod="kube-system/kube-apiserver-localhost" Nov 4 04:52:30.438983 kubelet[2985]: I1104 04:52:30.438749 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 04:52:30.438983 kubelet[2985]: I1104 04:52:30.438758 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 04:52:31.124207 kubelet[2985]: I1104 04:52:31.124160 2985 apiserver.go:52] "Watching apiserver" Nov 4 04:52:31.175635 kubelet[2985]: I1104 04:52:31.175310 2985 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 04:52:31.175717 kubelet[2985]: I1104 04:52:31.175696 2985 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 04:52:31.175949 kubelet[2985]: I1104 04:52:31.175862 2985 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 04:52:31.187097 kubelet[2985]: E1104 04:52:31.187078 2985 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 4 04:52:31.187832 kubelet[2985]: E1104 04:52:31.187813 2985 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 4 04:52:31.187970 kubelet[2985]: E1104 04:52:31.187935 2985 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 4 04:52:31.192125 kubelet[2985]: I1104 04:52:31.191879 2985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.191865841 podStartE2EDuration="4.191865841s" podCreationTimestamp="2025-11-04 04:52:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:52:31.191466914 +0000 UTC m=+1.133725398" watchObservedRunningTime="2025-11-04 04:52:31.191865841 +0000 UTC m=+1.134124318" Nov 4 04:52:31.207253 kubelet[2985]: I1104 04:52:31.207006 2985 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.206992901 podStartE2EDuration="4.206992901s" podCreationTimestamp="2025-11-04 04:52:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:52:31.19953993 +0000 UTC m=+1.141798416" watchObservedRunningTime="2025-11-04 04:52:31.206992901 +0000 UTC m=+1.149251386" Nov 4 04:52:31.216461 kubelet[2985]: I1104 04:52:31.216417 2985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.216405213 podStartE2EDuration="4.216405213s" podCreationTimestamp="2025-11-04 04:52:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:52:31.207182413 +0000 UTC m=+1.149440896" watchObservedRunningTime="2025-11-04 04:52:31.216405213 +0000 UTC m=+1.158663697" Nov 4 04:52:31.238341 kubelet[2985]: I1104 04:52:31.238310 2985 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 4 04:52:34.950156 kubelet[2985]: I1104 04:52:34.950114 2985 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 4 04:52:34.950659 containerd[1675]: time="2025-11-04T04:52:34.950413739Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 4 04:52:34.951513 kubelet[2985]: I1104 04:52:34.950936 2985 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 4 04:52:35.951387 systemd[1]: Created slice kubepods-besteffort-podb2fe57c6_af6e_4099_9b4c_8c7847f03492.slice - libcontainer container kubepods-besteffort-podb2fe57c6_af6e_4099_9b4c_8c7847f03492.slice. 
Nov 4 04:52:36.000596 kubelet[2985]: I1104 04:52:36.000496 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2fe57c6-af6e-4099-9b4c-8c7847f03492-lib-modules\") pod \"kube-proxy-gkc5k\" (UID: \"b2fe57c6-af6e-4099-9b4c-8c7847f03492\") " pod="kube-system/kube-proxy-gkc5k" Nov 4 04:52:36.000596 kubelet[2985]: I1104 04:52:36.000528 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc74m\" (UniqueName: \"kubernetes.io/projected/b2fe57c6-af6e-4099-9b4c-8c7847f03492-kube-api-access-fc74m\") pod \"kube-proxy-gkc5k\" (UID: \"b2fe57c6-af6e-4099-9b4c-8c7847f03492\") " pod="kube-system/kube-proxy-gkc5k" Nov 4 04:52:36.000596 kubelet[2985]: I1104 04:52:36.000542 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b2fe57c6-af6e-4099-9b4c-8c7847f03492-kube-proxy\") pod \"kube-proxy-gkc5k\" (UID: \"b2fe57c6-af6e-4099-9b4c-8c7847f03492\") " pod="kube-system/kube-proxy-gkc5k" Nov 4 04:52:36.000596 kubelet[2985]: I1104 04:52:36.000553 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2fe57c6-af6e-4099-9b4c-8c7847f03492-xtables-lock\") pod \"kube-proxy-gkc5k\" (UID: \"b2fe57c6-af6e-4099-9b4c-8c7847f03492\") " pod="kube-system/kube-proxy-gkc5k" Nov 4 04:52:36.068598 systemd[1]: Created slice kubepods-besteffort-podb8d28c8e_4930_4c4b_9634_9609e5be5842.slice - libcontainer container kubepods-besteffort-podb8d28c8e_4930_4c4b_9634_9609e5be5842.slice. Nov 4 04:52:36.101540 kubelet[2985]: I1104 04:52:36.101485 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b8d28c8e-4930-4c4b-9634-9609e5be5842-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-pxffr\" (UID: \"b8d28c8e-4930-4c4b-9634-9609e5be5842\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-pxffr" Nov 4 04:52:36.101540 kubelet[2985]: I1104 04:52:36.101526 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7srtr\" (UniqueName: \"kubernetes.io/projected/b8d28c8e-4930-4c4b-9634-9609e5be5842-kube-api-access-7srtr\") pod \"tigera-operator-65cdcdfd6d-pxffr\" (UID: \"b8d28c8e-4930-4c4b-9634-9609e5be5842\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-pxffr" Nov 4 04:52:36.265300 containerd[1675]: time="2025-11-04T04:52:36.265190637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gkc5k,Uid:b2fe57c6-af6e-4099-9b4c-8c7847f03492,Namespace:kube-system,Attempt:0,}" Nov 4 04:52:36.279281 containerd[1675]: time="2025-11-04T04:52:36.279246352Z" level=info msg="connecting to shim f2673c4bb52623db5de1d0e80939bff2d06b8f2db90fac63cbff9f89da80aa91" address="unix:///run/containerd/s/4856091d69fef2416d81838d5584164b2a8e650c2596efafe97b0ac5b67bff87" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:52:36.302295 systemd[1]: Started cri-containerd-f2673c4bb52623db5de1d0e80939bff2d06b8f2db90fac63cbff9f89da80aa91.scope - libcontainer container f2673c4bb52623db5de1d0e80939bff2d06b8f2db90fac63cbff9f89da80aa91. 
Nov 4 04:52:36.318928 containerd[1675]: time="2025-11-04T04:52:36.318902154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gkc5k,Uid:b2fe57c6-af6e-4099-9b4c-8c7847f03492,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2673c4bb52623db5de1d0e80939bff2d06b8f2db90fac63cbff9f89da80aa91\"" Nov 4 04:52:36.329054 containerd[1675]: time="2025-11-04T04:52:36.328994583Z" level=info msg="CreateContainer within sandbox \"f2673c4bb52623db5de1d0e80939bff2d06b8f2db90fac63cbff9f89da80aa91\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 4 04:52:36.355282 containerd[1675]: time="2025-11-04T04:52:36.355243638Z" level=info msg="Container a408a00180a0c83d42f18d37468786ff65922b9dd233107e1c5c1297d7c12483: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:52:36.360481 containerd[1675]: time="2025-11-04T04:52:36.360457377Z" level=info msg="CreateContainer within sandbox \"f2673c4bb52623db5de1d0e80939bff2d06b8f2db90fac63cbff9f89da80aa91\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a408a00180a0c83d42f18d37468786ff65922b9dd233107e1c5c1297d7c12483\"" Nov 4 04:52:36.361048 containerd[1675]: time="2025-11-04T04:52:36.361017142Z" level=info msg="StartContainer for \"a408a00180a0c83d42f18d37468786ff65922b9dd233107e1c5c1297d7c12483\"" Nov 4 04:52:36.362418 containerd[1675]: time="2025-11-04T04:52:36.362383545Z" level=info msg="connecting to shim a408a00180a0c83d42f18d37468786ff65922b9dd233107e1c5c1297d7c12483" address="unix:///run/containerd/s/4856091d69fef2416d81838d5584164b2a8e650c2596efafe97b0ac5b67bff87" protocol=ttrpc version=3 Nov 4 04:52:36.372458 containerd[1675]: time="2025-11-04T04:52:36.372428867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-pxffr,Uid:b8d28c8e-4930-4c4b-9634-9609e5be5842,Namespace:tigera-operator,Attempt:0,}" Nov 4 04:52:36.383306 systemd[1]: Started cri-containerd-a408a00180a0c83d42f18d37468786ff65922b9dd233107e1c5c1297d7c12483.scope - libcontainer container a408a00180a0c83d42f18d37468786ff65922b9dd233107e1c5c1297d7c12483. Nov 4 04:52:36.390922 containerd[1675]: time="2025-11-04T04:52:36.390892146Z" level=info msg="connecting to shim 13351d36aa250966e00866a7ad4a3181b1aede62f11aab6422b08819640262a6" address="unix:///run/containerd/s/44fcaf2cb617e7d5967217d882ced93a126578aa91ecd93664094c83597086cc" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:52:36.410285 systemd[1]: Started cri-containerd-13351d36aa250966e00866a7ad4a3181b1aede62f11aab6422b08819640262a6.scope - libcontainer container 13351d36aa250966e00866a7ad4a3181b1aede62f11aab6422b08819640262a6. Nov 4 04:52:36.439976 containerd[1675]: time="2025-11-04T04:52:36.439953341Z" level=info msg="StartContainer for \"a408a00180a0c83d42f18d37468786ff65922b9dd233107e1c5c1297d7c12483\" returns successfully" Nov 4 04:52:36.464594 containerd[1675]: time="2025-11-04T04:52:36.464564597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-pxffr,Uid:b8d28c8e-4930-4c4b-9634-9609e5be5842,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"13351d36aa250966e00866a7ad4a3181b1aede62f11aab6422b08819640262a6\"" Nov 4 04:52:36.465821 containerd[1675]: time="2025-11-04T04:52:36.465800764Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 4 04:52:37.117942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1319788279.mount: Deactivated successfully. Nov 4 04:52:37.802253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2573818906.mount: Deactivated successfully. 
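Editor's note: the kube-proxy-gkc5k entries above trace the CRI container lifecycle end to end: RunPodSandbox returns a sandbox id, CreateContainer creates kube-proxy inside that sandbox, and StartContainer reports success, with containerd starting a shim and systemd tracking it as a cri-containerd-*.scope. The following hypothetical Go sketch replays that call order by hand with the k8s.io/cri-api client; the socket path, pod name, UID, and image are placeholders, and a real containerd would expect a fuller sandbox config (log directory, cgroup parent, and so on).

// Illustrative sketch only: the RunPodSandbox -> CreateContainer ->
// StartContainer sequence visible in the log above, driven manually.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI endpoint: %v", err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	// 1. Create the pod sandbox; the runtime answers with the sandbox id that
	//    appears in the "returns sandbox id" log entry.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "example-pod", // hypothetical name, not from the log
			Namespace: "kube-system",
			Uid:       "00000000-0000-0000-0000-000000000000", // hypothetical UID
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatalf("RunPodSandbox: %v", err)
	}

	// 2. Create a container inside that sandbox, then start it. The image is a
	//    placeholder and must already be present on the node.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "example", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/pause:3.9"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatalf("CreateContainer: %v", err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatalf("StartContainer: %v", err)
	}
	log.Printf("started container %s in sandbox %s", ctr.ContainerId, sb.PodSandboxId)
}

The point of the sketch is only the order of CRI calls behind the log entries, not a production-ready pod definition.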
Nov 4 04:52:38.914160 containerd[1675]: time="2025-11-04T04:52:38.913847229Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Nov 4 04:52:38.915751 containerd[1675]: time="2025-11-04T04:52:38.915045171Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.449226545s" Nov 4 04:52:38.915751 containerd[1675]: time="2025-11-04T04:52:38.915064084Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 4 04:52:38.917706 containerd[1675]: time="2025-11-04T04:52:38.917667539Z" level=info msg="CreateContainer within sandbox \"13351d36aa250966e00866a7ad4a3181b1aede62f11aab6422b08819640262a6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 4 04:52:38.926558 containerd[1675]: time="2025-11-04T04:52:38.926530317Z" level=info msg="Container 8c54a63228ced27a67870ac6b78df6379fdef598fdfb46b354d38b933507bf0e: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:52:38.939923 containerd[1675]: time="2025-11-04T04:52:38.939866056Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:38.940381 containerd[1675]: time="2025-11-04T04:52:38.940340060Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:38.940874 containerd[1675]: time="2025-11-04T04:52:38.940857754Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:38.942185 containerd[1675]: time="2025-11-04T04:52:38.942168698Z" level=info msg="CreateContainer within sandbox \"13351d36aa250966e00866a7ad4a3181b1aede62f11aab6422b08819640262a6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8c54a63228ced27a67870ac6b78df6379fdef598fdfb46b354d38b933507bf0e\"" Nov 4 04:52:38.943013 containerd[1675]: time="2025-11-04T04:52:38.942995863Z" level=info msg="StartContainer for \"8c54a63228ced27a67870ac6b78df6379fdef598fdfb46b354d38b933507bf0e\"" Nov 4 04:52:38.944065 containerd[1675]: time="2025-11-04T04:52:38.944025782Z" level=info msg="connecting to shim 8c54a63228ced27a67870ac6b78df6379fdef598fdfb46b354d38b933507bf0e" address="unix:///run/containerd/s/44fcaf2cb617e7d5967217d882ced93a126578aa91ecd93664094c83597086cc" protocol=ttrpc version=3 Nov 4 04:52:38.961297 systemd[1]: Started cri-containerd-8c54a63228ced27a67870ac6b78df6379fdef598fdfb46b354d38b933507bf0e.scope - libcontainer container 8c54a63228ced27a67870ac6b78df6379fdef598fdfb46b354d38b933507bf0e. 
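Editor's note: the PullImage/"Pulled image" pair above shows containerd resolving quay.io/tigera/operator:v1.38.7 to a sha256 image id before the operator container is created. A minimal, hypothetical Go sketch of the corresponding CRI image pull follows; the socket path is an assumption, the image reference is the one named in the log, and no registry auth is supplied.

// Illustrative sketch only: the CRI PullImage call behind the entries above.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI endpoint: %v", err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.7"},
	})
	if err != nil {
		log.Fatalf("PullImage: %v", err)
	}
	// The runtime returns the resolved image reference (a sha256 id), which is
	// what later identifies the image when the container is created.
	log.Printf("pulled image ref: %s", resp.ImageRef)
}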
Nov 4 04:52:38.982406 containerd[1675]: time="2025-11-04T04:52:38.982380896Z" level=info msg="StartContainer for \"8c54a63228ced27a67870ac6b78df6379fdef598fdfb46b354d38b933507bf0e\" returns successfully" Nov 4 04:52:39.196579 kubelet[2985]: I1104 04:52:39.196479 2985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gkc5k" podStartSLOduration=4.196467151 podStartE2EDuration="4.196467151s" podCreationTimestamp="2025-11-04 04:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:52:37.192698544 +0000 UTC m=+7.134957028" watchObservedRunningTime="2025-11-04 04:52:39.196467151 +0000 UTC m=+9.138725634" Nov 4 04:52:42.151853 kubelet[2985]: I1104 04:52:42.151777 2985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-pxffr" podStartSLOduration=3.701706807 podStartE2EDuration="6.151767893s" podCreationTimestamp="2025-11-04 04:52:36 +0000 UTC" firstStartedPulling="2025-11-04 04:52:36.465441125 +0000 UTC m=+6.407699597" lastFinishedPulling="2025-11-04 04:52:38.915502208 +0000 UTC m=+8.857760683" observedRunningTime="2025-11-04 04:52:39.19725714 +0000 UTC m=+9.139515629" watchObservedRunningTime="2025-11-04 04:52:42.151767893 +0000 UTC m=+12.094026382" Nov 4 04:52:44.096233 sudo[2002]: pam_unix(sudo:session): session closed for user root Nov 4 04:52:44.097102 sshd[2001]: Connection closed by 147.75.109.163 port 44402 Nov 4 04:52:44.098741 sshd-session[1998]: pam_unix(sshd:session): session closed for user core Nov 4 04:52:44.100846 systemd[1]: sshd@6-139.178.70.105:22-147.75.109.163:44402.service: Deactivated successfully. Nov 4 04:52:44.103129 systemd[1]: session-9.scope: Deactivated successfully. Nov 4 04:52:44.103558 systemd[1]: session-9.scope: Consumed 3.434s CPU time, 153.1M memory peak. Nov 4 04:52:44.106169 systemd-logind[1653]: Session 9 logged out. Waiting for processes to exit. Nov 4 04:52:44.107920 systemd-logind[1653]: Removed session 9. Nov 4 04:52:48.146618 systemd[1]: Created slice kubepods-besteffort-pod2daf1748_3d84_45ee_a6f7_260c81302b35.slice - libcontainer container kubepods-besteffort-pod2daf1748_3d84_45ee_a6f7_260c81302b35.slice. 
Nov 4 04:52:48.176900 kubelet[2985]: I1104 04:52:48.176877 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2daf1748-3d84-45ee-a6f7-260c81302b35-typha-certs\") pod \"calico-typha-77c65bc454-tw228\" (UID: \"2daf1748-3d84-45ee-a6f7-260c81302b35\") " pod="calico-system/calico-typha-77c65bc454-tw228" Nov 4 04:52:48.177157 kubelet[2985]: I1104 04:52:48.176908 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2daf1748-3d84-45ee-a6f7-260c81302b35-tigera-ca-bundle\") pod \"calico-typha-77c65bc454-tw228\" (UID: \"2daf1748-3d84-45ee-a6f7-260c81302b35\") " pod="calico-system/calico-typha-77c65bc454-tw228" Nov 4 04:52:48.177157 kubelet[2985]: I1104 04:52:48.176933 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm4gm\" (UniqueName: \"kubernetes.io/projected/2daf1748-3d84-45ee-a6f7-260c81302b35-kube-api-access-xm4gm\") pod \"calico-typha-77c65bc454-tw228\" (UID: \"2daf1748-3d84-45ee-a6f7-260c81302b35\") " pod="calico-system/calico-typha-77c65bc454-tw228" Nov 4 04:52:48.319130 systemd[1]: Created slice kubepods-besteffort-pod87d6c001_7e73_4436_9093_92df4619c304.slice - libcontainer container kubepods-besteffort-pod87d6c001_7e73_4436_9093_92df4619c304.slice. Nov 4 04:52:48.379050 kubelet[2985]: I1104 04:52:48.379007 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/87d6c001-7e73-4436-9093-92df4619c304-cni-bin-dir\") pod \"calico-node-gf622\" (UID: \"87d6c001-7e73-4436-9093-92df4619c304\") " pod="calico-system/calico-node-gf622" Nov 4 04:52:48.379167 kubelet[2985]: I1104 04:52:48.379056 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/87d6c001-7e73-4436-9093-92df4619c304-var-run-calico\") pod \"calico-node-gf622\" (UID: \"87d6c001-7e73-4436-9093-92df4619c304\") " pod="calico-system/calico-node-gf622" Nov 4 04:52:48.379167 kubelet[2985]: I1104 04:52:48.379075 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/87d6c001-7e73-4436-9093-92df4619c304-cni-log-dir\") pod \"calico-node-gf622\" (UID: \"87d6c001-7e73-4436-9093-92df4619c304\") " pod="calico-system/calico-node-gf622" Nov 4 04:52:48.379167 kubelet[2985]: I1104 04:52:48.379086 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/87d6c001-7e73-4436-9093-92df4619c304-cni-net-dir\") pod \"calico-node-gf622\" (UID: \"87d6c001-7e73-4436-9093-92df4619c304\") " pod="calico-system/calico-node-gf622" Nov 4 04:52:48.379167 kubelet[2985]: I1104 04:52:48.379097 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87d6c001-7e73-4436-9093-92df4619c304-lib-modules\") pod \"calico-node-gf622\" (UID: \"87d6c001-7e73-4436-9093-92df4619c304\") " pod="calico-system/calico-node-gf622" Nov 4 04:52:48.379167 kubelet[2985]: I1104 04:52:48.379106 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/87d6c001-7e73-4436-9093-92df4619c304-node-certs\") pod \"calico-node-gf622\" (UID: \"87d6c001-7e73-4436-9093-92df4619c304\") " pod="calico-system/calico-node-gf622" Nov 4 04:52:48.379274 kubelet[2985]: I1104 04:52:48.379114 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87d6c001-7e73-4436-9093-92df4619c304-xtables-lock\") pod \"calico-node-gf622\" (UID: \"87d6c001-7e73-4436-9093-92df4619c304\") " pod="calico-system/calico-node-gf622" Nov 4 04:52:48.379274 kubelet[2985]: I1104 04:52:48.379126 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf8vv\" (UniqueName: \"kubernetes.io/projected/87d6c001-7e73-4436-9093-92df4619c304-kube-api-access-vf8vv\") pod \"calico-node-gf622\" (UID: \"87d6c001-7e73-4436-9093-92df4619c304\") " pod="calico-system/calico-node-gf622" Nov 4 04:52:48.379274 kubelet[2985]: I1104 04:52:48.379147 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/87d6c001-7e73-4436-9093-92df4619c304-flexvol-driver-host\") pod \"calico-node-gf622\" (UID: \"87d6c001-7e73-4436-9093-92df4619c304\") " pod="calico-system/calico-node-gf622" Nov 4 04:52:48.379274 kubelet[2985]: I1104 04:52:48.379157 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/87d6c001-7e73-4436-9093-92df4619c304-policysync\") pod \"calico-node-gf622\" (UID: \"87d6c001-7e73-4436-9093-92df4619c304\") " pod="calico-system/calico-node-gf622" Nov 4 04:52:48.379274 kubelet[2985]: I1104 04:52:48.379165 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87d6c001-7e73-4436-9093-92df4619c304-tigera-ca-bundle\") pod \"calico-node-gf622\" (UID: \"87d6c001-7e73-4436-9093-92df4619c304\") " pod="calico-system/calico-node-gf622" Nov 4 04:52:48.379357 kubelet[2985]: I1104 04:52:48.379177 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/87d6c001-7e73-4436-9093-92df4619c304-var-lib-calico\") pod \"calico-node-gf622\" (UID: \"87d6c001-7e73-4436-9093-92df4619c304\") " pod="calico-system/calico-node-gf622" Nov 4 04:52:48.455993 containerd[1675]: time="2025-11-04T04:52:48.455845894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77c65bc454-tw228,Uid:2daf1748-3d84-45ee-a6f7-260c81302b35,Namespace:calico-system,Attempt:0,}" Nov 4 04:52:48.504124 kubelet[2985]: E1104 04:52:48.504022 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.504124 kubelet[2985]: W1104 04:52:48.504043 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.504124 kubelet[2985]: E1104 04:52:48.504078 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:52:48.509552 containerd[1675]: time="2025-11-04T04:52:48.509473407Z" level=info msg="connecting to shim bc4980718329171576fb9655bb35ed20cd9af645b24b062297c8b20785a8509e" address="unix:///run/containerd/s/2e153beb00f97625c6e186637a6c954dc33a365eba6feed98208277858064d08" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:52:48.536286 systemd[1]: Started cri-containerd-bc4980718329171576fb9655bb35ed20cd9af645b24b062297c8b20785a8509e.scope - libcontainer container bc4980718329171576fb9655bb35ed20cd9af645b24b062297c8b20785a8509e. Nov 4 04:52:48.544292 kubelet[2985]: E1104 04:52:48.544257 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8l85" podUID="fcffefa6-5e43-470d-ba72-4fb9af0e6455" Nov 4 04:52:48.565100 kubelet[2985]: E1104 04:52:48.565070 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.565100 kubelet[2985]: W1104 04:52:48.565102 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.565243 kubelet[2985]: E1104 04:52:48.565121 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.565535 kubelet[2985]: E1104 04:52:48.565525 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.565535 kubelet[2985]: W1104 04:52:48.565532 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.565589 kubelet[2985]: E1104 04:52:48.565539 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.565711 kubelet[2985]: E1104 04:52:48.565702 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.565711 kubelet[2985]: W1104 04:52:48.565710 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.565770 kubelet[2985]: E1104 04:52:48.565718 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:52:48.565923 kubelet[2985]: E1104 04:52:48.565914 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.565951 kubelet[2985]: W1104 04:52:48.565923 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.565951 kubelet[2985]: E1104 04:52:48.565931 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.566172 kubelet[2985]: E1104 04:52:48.566163 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.566172 kubelet[2985]: W1104 04:52:48.566171 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.566232 kubelet[2985]: E1104 04:52:48.566179 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.566396 kubelet[2985]: E1104 04:52:48.566347 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.566396 kubelet[2985]: W1104 04:52:48.566355 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.566396 kubelet[2985]: E1104 04:52:48.566361 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.566529 kubelet[2985]: E1104 04:52:48.566521 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.566558 kubelet[2985]: W1104 04:52:48.566528 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.566558 kubelet[2985]: E1104 04:52:48.566535 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.566705 kubelet[2985]: E1104 04:52:48.566696 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.566705 kubelet[2985]: W1104 04:52:48.566703 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.566755 kubelet[2985]: E1104 04:52:48.566710 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:52:48.566878 kubelet[2985]: E1104 04:52:48.566869 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.566878 kubelet[2985]: W1104 04:52:48.566876 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.566941 kubelet[2985]: E1104 04:52:48.566883 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.567051 kubelet[2985]: E1104 04:52:48.567041 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.567051 kubelet[2985]: W1104 04:52:48.567049 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.567101 kubelet[2985]: E1104 04:52:48.567056 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.567259 kubelet[2985]: E1104 04:52:48.567250 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.567259 kubelet[2985]: W1104 04:52:48.567259 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.567312 kubelet[2985]: E1104 04:52:48.567265 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.567380 kubelet[2985]: E1104 04:52:48.567372 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.567405 kubelet[2985]: W1104 04:52:48.567379 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.567405 kubelet[2985]: E1104 04:52:48.567386 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.567553 kubelet[2985]: E1104 04:52:48.567544 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.567553 kubelet[2985]: W1104 04:52:48.567550 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.567609 kubelet[2985]: E1104 04:52:48.567556 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:52:48.567654 kubelet[2985]: E1104 04:52:48.567646 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.567654 kubelet[2985]: W1104 04:52:48.567653 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.567701 kubelet[2985]: E1104 04:52:48.567660 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.567744 kubelet[2985]: E1104 04:52:48.567737 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.567744 kubelet[2985]: W1104 04:52:48.567743 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.567802 kubelet[2985]: E1104 04:52:48.567748 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.567838 kubelet[2985]: E1104 04:52:48.567829 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.567861 kubelet[2985]: W1104 04:52:48.567837 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.567861 kubelet[2985]: E1104 04:52:48.567845 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.567934 kubelet[2985]: E1104 04:52:48.567926 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.567960 kubelet[2985]: W1104 04:52:48.567933 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.567960 kubelet[2985]: E1104 04:52:48.567940 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.568030 kubelet[2985]: E1104 04:52:48.568020 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.568030 kubelet[2985]: W1104 04:52:48.568030 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.568081 kubelet[2985]: E1104 04:52:48.568037 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:52:48.568118 kubelet[2985]: E1104 04:52:48.568107 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.568155 kubelet[2985]: W1104 04:52:48.568116 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.568155 kubelet[2985]: E1104 04:52:48.568125 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.568229 kubelet[2985]: E1104 04:52:48.568223 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.568229 kubelet[2985]: W1104 04:52:48.568228 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.568268 kubelet[2985]: E1104 04:52:48.568233 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.580487 kubelet[2985]: E1104 04:52:48.580423 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.580487 kubelet[2985]: W1104 04:52:48.580434 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.580487 kubelet[2985]: E1104 04:52:48.580445 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.580487 kubelet[2985]: I1104 04:52:48.580460 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fcffefa6-5e43-470d-ba72-4fb9af0e6455-socket-dir\") pod \"csi-node-driver-n8l85\" (UID: \"fcffefa6-5e43-470d-ba72-4fb9af0e6455\") " pod="calico-system/csi-node-driver-n8l85" Nov 4 04:52:48.580791 kubelet[2985]: E1104 04:52:48.580561 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.580791 kubelet[2985]: W1104 04:52:48.580567 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.580791 kubelet[2985]: E1104 04:52:48.580572 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:52:48.580791 kubelet[2985]: I1104 04:52:48.580580 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fcffefa6-5e43-470d-ba72-4fb9af0e6455-registration-dir\") pod \"csi-node-driver-n8l85\" (UID: \"fcffefa6-5e43-470d-ba72-4fb9af0e6455\") " pod="calico-system/csi-node-driver-n8l85" Nov 4 04:52:48.581034 kubelet[2985]: E1104 04:52:48.580904 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.581034 kubelet[2985]: W1104 04:52:48.580912 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.581034 kubelet[2985]: E1104 04:52:48.580918 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.581034 kubelet[2985]: I1104 04:52:48.580928 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fcffefa6-5e43-470d-ba72-4fb9af0e6455-varrun\") pod \"csi-node-driver-n8l85\" (UID: \"fcffefa6-5e43-470d-ba72-4fb9af0e6455\") " pod="calico-system/csi-node-driver-n8l85" Nov 4 04:52:48.581034 kubelet[2985]: E1104 04:52:48.581025 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.581481 kubelet[2985]: W1104 04:52:48.581031 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.581481 kubelet[2985]: E1104 04:52:48.581044 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.581481 kubelet[2985]: I1104 04:52:48.581079 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fcffefa6-5e43-470d-ba72-4fb9af0e6455-kubelet-dir\") pod \"csi-node-driver-n8l85\" (UID: \"fcffefa6-5e43-470d-ba72-4fb9af0e6455\") " pod="calico-system/csi-node-driver-n8l85" Nov 4 04:52:48.581481 kubelet[2985]: E1104 04:52:48.581197 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.581481 kubelet[2985]: W1104 04:52:48.581202 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.581481 kubelet[2985]: E1104 04:52:48.581207 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:52:48.581481 kubelet[2985]: I1104 04:52:48.581218 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm7kt\" (UniqueName: \"kubernetes.io/projected/fcffefa6-5e43-470d-ba72-4fb9af0e6455-kube-api-access-mm7kt\") pod \"csi-node-driver-n8l85\" (UID: \"fcffefa6-5e43-470d-ba72-4fb9af0e6455\") " pod="calico-system/csi-node-driver-n8l85" Nov 4 04:52:48.581481 kubelet[2985]: E1104 04:52:48.581338 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.581617 kubelet[2985]: W1104 04:52:48.581352 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.581617 kubelet[2985]: E1104 04:52:48.581361 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.581942 kubelet[2985]: E1104 04:52:48.581844 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.581942 kubelet[2985]: W1104 04:52:48.581851 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.581942 kubelet[2985]: E1104 04:52:48.581858 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.582237 kubelet[2985]: E1104 04:52:48.582024 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.582237 kubelet[2985]: W1104 04:52:48.582031 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.582237 kubelet[2985]: E1104 04:52:48.582039 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.582557 kubelet[2985]: E1104 04:52:48.582420 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.582557 kubelet[2985]: W1104 04:52:48.582427 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.582557 kubelet[2985]: E1104 04:52:48.582433 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:52:48.583422 kubelet[2985]: E1104 04:52:48.583335 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.583422 kubelet[2985]: W1104 04:52:48.583403 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.583422 kubelet[2985]: E1104 04:52:48.583413 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.584343 kubelet[2985]: E1104 04:52:48.584315 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.584343 kubelet[2985]: W1104 04:52:48.584325 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.584343 kubelet[2985]: E1104 04:52:48.584333 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.584896 kubelet[2985]: E1104 04:52:48.584875 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.584896 kubelet[2985]: W1104 04:52:48.584883 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.584896 kubelet[2985]: E1104 04:52:48.584889 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.585588 kubelet[2985]: E1104 04:52:48.585581 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.585636 kubelet[2985]: W1104 04:52:48.585629 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.585694 kubelet[2985]: E1104 04:52:48.585676 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.586424 kubelet[2985]: E1104 04:52:48.586350 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.586424 kubelet[2985]: W1104 04:52:48.586358 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.586424 kubelet[2985]: E1104 04:52:48.586364 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:52:48.589906 kubelet[2985]: E1104 04:52:48.586807 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.589906 kubelet[2985]: W1104 04:52:48.586812 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.589906 kubelet[2985]: E1104 04:52:48.586818 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.596197 containerd[1675]: time="2025-11-04T04:52:48.596168683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77c65bc454-tw228,Uid:2daf1748-3d84-45ee-a6f7-260c81302b35,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc4980718329171576fb9655bb35ed20cd9af645b24b062297c8b20785a8509e\"" Nov 4 04:52:48.597915 containerd[1675]: time="2025-11-04T04:52:48.597492828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 4 04:52:48.626039 containerd[1675]: time="2025-11-04T04:52:48.625996578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gf622,Uid:87d6c001-7e73-4436-9093-92df4619c304,Namespace:calico-system,Attempt:0,}" Nov 4 04:52:48.680238 containerd[1675]: time="2025-11-04T04:52:48.680208676Z" level=info msg="connecting to shim 353455624ffa271a89a4a830f330ed267f8aa5b3cd4d74516877e8e7a7bf8cc6" address="unix:///run/containerd/s/06934d940f252d12f75959e4a1e5cca03239998601d4011cec17c891c4812c0c" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:52:48.682401 kubelet[2985]: E1104 04:52:48.682029 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.682401 kubelet[2985]: W1104 04:52:48.682070 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.682603 kubelet[2985]: E1104 04:52:48.682084 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.682680 kubelet[2985]: E1104 04:52:48.682674 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.682760 kubelet[2985]: W1104 04:52:48.682726 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.682760 kubelet[2985]: E1104 04:52:48.682735 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:52:48.690753 kubelet[2985]: E1104 04:52:48.690677 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.690753 kubelet[2985]: W1104 04:52:48.690683 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.690753 kubelet[2985]: E1104 04:52:48.690689 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.691180 kubelet[2985]: E1104 04:52:48.691173 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.692778 kubelet[2985]: W1104 04:52:48.692767 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.692907 kubelet[2985]: E1104 04:52:48.692817 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.693454 kubelet[2985]: E1104 04:52:48.693446 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.695000 kubelet[2985]: W1104 04:52:48.694985 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.695177 kubelet[2985]: E1104 04:52:48.695094 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.698126 kubelet[2985]: E1104 04:52:48.698111 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:48.698220 kubelet[2985]: W1104 04:52:48.698210 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:48.698312 kubelet[2985]: E1104 04:52:48.698305 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:48.713296 systemd[1]: Started cri-containerd-353455624ffa271a89a4a830f330ed267f8aa5b3cd4d74516877e8e7a7bf8cc6.scope - libcontainer container 353455624ffa271a89a4a830f330ed267f8aa5b3cd4d74516877e8e7a7bf8cc6. Nov 4 04:52:48.747330 containerd[1675]: time="2025-11-04T04:52:48.747307407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gf622,Uid:87d6c001-7e73-4436-9093-92df4619c304,Namespace:calico-system,Attempt:0,} returns sandbox id \"353455624ffa271a89a4a830f330ed267f8aa5b3cd4d74516877e8e7a7bf8cc6\"" Nov 4 04:52:49.831384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1611954661.mount: Deactivated successfully. 
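The recurring kubelet messages above come from the FlexVolume dynamic plugin prober: it executes the driver it expects under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument "init" and JSON-decodes whatever the driver prints on stdout. Because that executable is not present on this node ("executable file not found in $PATH"), the call produces no output at all, and decoding an empty string is exactly what surfaces as "unexpected end of JSON input". A minimal Go sketch of that failure mode (illustrative only, not the kubelet's actual code; the path used in main is hypothetical):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus mirrors the shape of JSON a FlexVolume driver is expected to
// print, e.g. {"status":"Success","capabilities":{"attach":false}}.
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// probeInit runs "<driver> init" and decodes its stdout, the same general
// pattern the log entries above describe.
func probeInit(driver string) (*DriverStatus, error) {
	out, execErr := exec.Command(driver, "init").Output() // out stays empty when the binary is missing
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// With empty output this is exactly "unexpected end of JSON input".
		return nil, fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)", out, err, execErr)
	}
	return &st, nil
}

func main() {
	// Hypothetical path standing in for the absent
	// /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds.
	if _, err := probeInit("/nonexistent/uds"); err != nil {
		fmt.Println(err)
	}
}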
Nov 4 04:52:50.144253 kubelet[2985]: E1104 04:52:50.144222 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8l85" podUID="fcffefa6-5e43-470d-ba72-4fb9af0e6455" Nov 4 04:52:50.533691 containerd[1675]: time="2025-11-04T04:52:50.533194815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:50.534713 containerd[1675]: time="2025-11-04T04:52:50.534693449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Nov 4 04:52:50.535374 containerd[1675]: time="2025-11-04T04:52:50.535346577Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:50.536259 containerd[1675]: time="2025-11-04T04:52:50.536231921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:50.536846 containerd[1675]: time="2025-11-04T04:52:50.536607487Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.93908396s" Nov 4 04:52:50.536846 containerd[1675]: time="2025-11-04T04:52:50.536633842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 4 04:52:50.537196 containerd[1675]: time="2025-11-04T04:52:50.537181541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 4 04:52:50.547610 containerd[1675]: time="2025-11-04T04:52:50.547570954Z" level=info msg="CreateContainer within sandbox \"bc4980718329171576fb9655bb35ed20cd9af645b24b062297c8b20785a8509e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 4 04:52:50.574299 containerd[1675]: time="2025-11-04T04:52:50.573812167Z" level=info msg="Container ac74722f3f6abb4f79149dc0a2e1f4ca9d3f5a26728a90988dd7ee0b68d205a2: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:52:50.575977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1699438314.mount: Deactivated successfully. 
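As a quick sanity check on the typha pull timing reported just above: containerd logged the PullImage request for ghcr.io/flatcar/calico/typha:v3.30.4 at 2025-11-04T04:52:48.597492828Z and the "Pulled image ... in 1.93908396s" event at 2025-11-04T04:52:50.536607487Z, so the wall-clock gap between the two entries should agree with the reported duration apart from the cost of emitting the second line. A small sketch that redoes the subtraction (timestamps copied from the log; nothing here talks to containerd):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken verbatim from the two containerd entries above.
	start, _ := time.Parse(time.RFC3339Nano, "2025-11-04T04:52:48.597492828Z") // PullImage request logged
	end, _ := time.Parse(time.RFC3339Nano, "2025-11-04T04:52:50.536607487Z")   // "Pulled image" event logged
	fmt.Println(end.Sub(start)) // 1.939114659s, about 31µs above the reported 1.93908396s pull time
}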
Nov 4 04:52:50.624757 containerd[1675]: time="2025-11-04T04:52:50.624725187Z" level=info msg="CreateContainer within sandbox \"bc4980718329171576fb9655bb35ed20cd9af645b24b062297c8b20785a8509e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ac74722f3f6abb4f79149dc0a2e1f4ca9d3f5a26728a90988dd7ee0b68d205a2\"" Nov 4 04:52:50.625693 containerd[1675]: time="2025-11-04T04:52:50.625676223Z" level=info msg="StartContainer for \"ac74722f3f6abb4f79149dc0a2e1f4ca9d3f5a26728a90988dd7ee0b68d205a2\"" Nov 4 04:52:50.627416 containerd[1675]: time="2025-11-04T04:52:50.627398566Z" level=info msg="connecting to shim ac74722f3f6abb4f79149dc0a2e1f4ca9d3f5a26728a90988dd7ee0b68d205a2" address="unix:///run/containerd/s/2e153beb00f97625c6e186637a6c954dc33a365eba6feed98208277858064d08" protocol=ttrpc version=3 Nov 4 04:52:50.690270 systemd[1]: Started cri-containerd-ac74722f3f6abb4f79149dc0a2e1f4ca9d3f5a26728a90988dd7ee0b68d205a2.scope - libcontainer container ac74722f3f6abb4f79149dc0a2e1f4ca9d3f5a26728a90988dd7ee0b68d205a2. Nov 4 04:52:50.753292 containerd[1675]: time="2025-11-04T04:52:50.753267947Z" level=info msg="StartContainer for \"ac74722f3f6abb4f79149dc0a2e1f4ca9d3f5a26728a90988dd7ee0b68d205a2\" returns successfully" Nov 4 04:52:51.271426 kubelet[2985]: I1104 04:52:51.271366 2985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77c65bc454-tw228" podStartSLOduration=1.331478415 podStartE2EDuration="3.271350418s" podCreationTimestamp="2025-11-04 04:52:48 +0000 UTC" firstStartedPulling="2025-11-04 04:52:48.597260168 +0000 UTC m=+18.539518643" lastFinishedPulling="2025-11-04 04:52:50.537132167 +0000 UTC m=+20.479390646" observedRunningTime="2025-11-04 04:52:51.270630621 +0000 UTC m=+21.212889107" watchObservedRunningTime="2025-11-04 04:52:51.271350418 +0000 UTC m=+21.213608898" Nov 4 04:52:51.285024 kubelet[2985]: E1104 04:52:51.284998 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:51.285024 kubelet[2985]: W1104 04:52:51.285018 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:51.285167 kubelet[2985]: E1104 04:52:51.285036 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:51.285167 kubelet[2985]: E1104 04:52:51.285155 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:51.285167 kubelet[2985]: W1104 04:52:51.285162 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:51.285336 kubelet[2985]: E1104 04:52:51.285168 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:52:51.304550 kubelet[2985]: E1104 04:52:51.304542 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:51.304550 kubelet[2985]: W1104 04:52:51.304546 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:51.304585 kubelet[2985]: E1104 04:52:51.304551 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:52.153410 kubelet[2985]: E1104 04:52:52.152944 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8l85" podUID="fcffefa6-5e43-470d-ba72-4fb9af0e6455" Nov 4 04:52:52.218375 containerd[1675]: time="2025-11-04T04:52:52.218336966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:52.220071 containerd[1675]: time="2025-11-04T04:52:52.220050534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Nov 4 04:52:52.220358 containerd[1675]: time="2025-11-04T04:52:52.220336892Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:52.223665 containerd[1675]: time="2025-11-04T04:52:52.223625650Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:52.225403 containerd[1675]: time="2025-11-04T04:52:52.225005036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.687806163s" Nov 4 04:52:52.225403 containerd[1675]: time="2025-11-04T04:52:52.225385470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 4 04:52:52.229191 containerd[1675]: time="2025-11-04T04:52:52.228119385Z" level=info msg="CreateContainer within sandbox \"353455624ffa271a89a4a830f330ed267f8aa5b3cd4d74516877e8e7a7bf8cc6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 4 04:52:52.234909 containerd[1675]: time="2025-11-04T04:52:52.234883337Z" level=info msg="Container 3e7e3626ae412dca17fba4d9d0773524b90cf8af7646b0006eacd8b9f38da5e6: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:52:52.237445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount622899359.mount: Deactivated successfully. 
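The pod_startup_latency_tracker entry for calico-typha-77c65bc454-tw228 a little further up can be reproduced from its own fields: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that end-to-end figure minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). The sketch below just redoes that arithmetic with the printed timestamps; treat the relationship as an observation about these particular numbers rather than a statement about kubelet internals:

package main

import (
	"fmt"
	"time"
)

// parse handles the "2025-11-04 04:52:48.597260168 +0000 UTC" form printed in
// the kubelet entry (a fractional second in the input is accepted even though
// the layout omits it).
func parse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := parse("2025-11-04 04:52:48 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2025-11-04 04:52:48.597260168 +0000 UTC") // firstStartedPulling
	lastPull := parse("2025-11-04 04:52:50.537132167 +0000 UTC")  // lastFinishedPulling
	running := parse("2025-11-04 04:52:51.271350418 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)          // 3.271350418s == podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 1.331478419s, matching podStartSLOduration up to ns rounding
	fmt.Println(e2e, slo)
}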
Nov 4 04:52:52.255376 containerd[1675]: time="2025-11-04T04:52:52.255349929Z" level=info msg="CreateContainer within sandbox \"353455624ffa271a89a4a830f330ed267f8aa5b3cd4d74516877e8e7a7bf8cc6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3e7e3626ae412dca17fba4d9d0773524b90cf8af7646b0006eacd8b9f38da5e6\"" Nov 4 04:52:52.258103 containerd[1675]: time="2025-11-04T04:52:52.257459975Z" level=info msg="StartContainer for \"3e7e3626ae412dca17fba4d9d0773524b90cf8af7646b0006eacd8b9f38da5e6\"" Nov 4 04:52:52.259999 containerd[1675]: time="2025-11-04T04:52:52.259865131Z" level=info msg="connecting to shim 3e7e3626ae412dca17fba4d9d0773524b90cf8af7646b0006eacd8b9f38da5e6" address="unix:///run/containerd/s/06934d940f252d12f75959e4a1e5cca03239998601d4011cec17c891c4812c0c" protocol=ttrpc version=3 Nov 4 04:52:52.290186 kubelet[2985]: I1104 04:52:52.290127 2985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 04:52:52.293296 kubelet[2985]: E1104 04:52:52.292824 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.293296 kubelet[2985]: W1104 04:52:52.292843 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.297276 kubelet[2985]: E1104 04:52:52.297236 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:52.297762 kubelet[2985]: E1104 04:52:52.297745 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.297762 kubelet[2985]: W1104 04:52:52.297759 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.297818 kubelet[2985]: E1104 04:52:52.297771 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:52.297936 kubelet[2985]: E1104 04:52:52.297900 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.297936 kubelet[2985]: W1104 04:52:52.297907 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.297936 kubelet[2985]: E1104 04:52:52.297913 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:52:52.299296 kubelet[2985]: E1104 04:52:52.299284 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.299373 kubelet[2985]: W1104 04:52:52.299302 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.299373 kubelet[2985]: E1104 04:52:52.299308 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:52.308185 kubelet[2985]: E1104 04:52:52.299393 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.308185 kubelet[2985]: W1104 04:52:52.299399 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.308185 kubelet[2985]: E1104 04:52:52.299404 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:52.299576 systemd[1]: Started cri-containerd-3e7e3626ae412dca17fba4d9d0773524b90cf8af7646b0006eacd8b9f38da5e6.scope - libcontainer container 3e7e3626ae412dca17fba4d9d0773524b90cf8af7646b0006eacd8b9f38da5e6. Nov 4 04:52:52.314462 kubelet[2985]: E1104 04:52:52.310081 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.314462 kubelet[2985]: W1104 04:52:52.310092 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.314462 kubelet[2985]: E1104 04:52:52.310104 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:52.314462 kubelet[2985]: E1104 04:52:52.311013 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.314462 kubelet[2985]: W1104 04:52:52.311019 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.314462 kubelet[2985]: E1104 04:52:52.311027 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:52:52.314462 kubelet[2985]: E1104 04:52:52.311132 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.314462 kubelet[2985]: W1104 04:52:52.311350 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.314462 kubelet[2985]: E1104 04:52:52.311357 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:52.314462 kubelet[2985]: E1104 04:52:52.311475 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.314642 kubelet[2985]: W1104 04:52:52.311479 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.314642 kubelet[2985]: E1104 04:52:52.311484 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:52.314642 kubelet[2985]: E1104 04:52:52.311713 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.314642 kubelet[2985]: W1104 04:52:52.311718 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.314642 kubelet[2985]: E1104 04:52:52.311724 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:52.314642 kubelet[2985]: E1104 04:52:52.311800 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.314642 kubelet[2985]: W1104 04:52:52.311804 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.314642 kubelet[2985]: E1104 04:52:52.311809 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:52.314642 kubelet[2985]: E1104 04:52:52.311902 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.314642 kubelet[2985]: W1104 04:52:52.311906 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.315956 kubelet[2985]: E1104 04:52:52.311910 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:52:52.315956 kubelet[2985]: E1104 04:52:52.312658 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.315956 kubelet[2985]: W1104 04:52:52.312663 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.315956 kubelet[2985]: E1104 04:52:52.312669 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:52.315956 kubelet[2985]: E1104 04:52:52.312762 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.315956 kubelet[2985]: W1104 04:52:52.312766 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.315956 kubelet[2985]: E1104 04:52:52.312771 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:52.315956 kubelet[2985]: E1104 04:52:52.313086 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.315956 kubelet[2985]: W1104 04:52:52.313091 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.315956 kubelet[2985]: E1104 04:52:52.313097 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:52.316120 kubelet[2985]: E1104 04:52:52.313190 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.316120 kubelet[2985]: W1104 04:52:52.313196 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.316120 kubelet[2985]: E1104 04:52:52.313201 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:52.316120 kubelet[2985]: E1104 04:52:52.313476 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.316120 kubelet[2985]: W1104 04:52:52.313481 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.316120 kubelet[2985]: E1104 04:52:52.313487 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:52:52.316120 kubelet[2985]: E1104 04:52:52.313887 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.316120 kubelet[2985]: W1104 04:52:52.313894 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.316120 kubelet[2985]: E1104 04:52:52.313899 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:52.316120 kubelet[2985]: E1104 04:52:52.314016 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.316733 kubelet[2985]: W1104 04:52:52.314158 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.316733 kubelet[2985]: E1104 04:52:52.314167 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:52.316733 kubelet[2985]: E1104 04:52:52.314247 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.316733 kubelet[2985]: W1104 04:52:52.314251 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.316733 kubelet[2985]: E1104 04:52:52.314255 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:52.316733 kubelet[2985]: E1104 04:52:52.314454 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.316733 kubelet[2985]: W1104 04:52:52.314459 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.316733 kubelet[2985]: E1104 04:52:52.314465 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:52.316733 kubelet[2985]: E1104 04:52:52.314545 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.316733 kubelet[2985]: W1104 04:52:52.314549 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.317113 kubelet[2985]: E1104 04:52:52.314554 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:52:52.317113 kubelet[2985]: E1104 04:52:52.315087 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:52:52.317113 kubelet[2985]: W1104 04:52:52.315093 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:52:52.317113 kubelet[2985]: E1104 04:52:52.315098 2985 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:52:52.343307 systemd[1]: cri-containerd-3e7e3626ae412dca17fba4d9d0773524b90cf8af7646b0006eacd8b9f38da5e6.scope: Deactivated successfully. Nov 4 04:52:52.403496 containerd[1675]: time="2025-11-04T04:52:52.402875278Z" level=info msg="StartContainer for \"3e7e3626ae412dca17fba4d9d0773524b90cf8af7646b0006eacd8b9f38da5e6\" returns successfully" Nov 4 04:52:52.403687 containerd[1675]: time="2025-11-04T04:52:52.403662464Z" level=info msg="received exit event container_id:\"3e7e3626ae412dca17fba4d9d0773524b90cf8af7646b0006eacd8b9f38da5e6\" id:\"3e7e3626ae412dca17fba4d9d0773524b90cf8af7646b0006eacd8b9f38da5e6\" pid:3672 exited_at:{seconds:1762231972 nanos:344851734}" Nov 4 04:52:52.418314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e7e3626ae412dca17fba4d9d0773524b90cf8af7646b0006eacd8b9f38da5e6-rootfs.mount: Deactivated successfully. Nov 4 04:52:53.292177 containerd[1675]: time="2025-11-04T04:52:53.291796486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 4 04:52:54.141707 kubelet[2985]: E1104 04:52:54.141505 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8l85" podUID="fcffefa6-5e43-470d-ba72-4fb9af0e6455" Nov 4 04:52:56.141352 kubelet[2985]: E1104 04:52:56.141325 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8l85" podUID="fcffefa6-5e43-470d-ba72-4fb9af0e6455" Nov 4 04:52:58.140954 kubelet[2985]: E1104 04:52:58.140929 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8l85" podUID="fcffefa6-5e43-470d-ba72-4fb9af0e6455" Nov 4 04:52:58.687527 containerd[1675]: time="2025-11-04T04:52:58.687500496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:58.692638 containerd[1675]: time="2025-11-04T04:52:58.692617563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Nov 4 04:52:58.692883 containerd[1675]: time="2025-11-04T04:52:58.692869379Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:58.693945 
containerd[1675]: time="2025-11-04T04:52:58.693930015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:52:58.694628 containerd[1675]: time="2025-11-04T04:52:58.694612401Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.402791384s" Nov 4 04:52:58.694653 containerd[1675]: time="2025-11-04T04:52:58.694630929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 4 04:52:58.717658 containerd[1675]: time="2025-11-04T04:52:58.717635983Z" level=info msg="CreateContainer within sandbox \"353455624ffa271a89a4a830f330ed267f8aa5b3cd4d74516877e8e7a7bf8cc6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 4 04:52:58.723447 containerd[1675]: time="2025-11-04T04:52:58.723426995Z" level=info msg="Container 83b017122936d543ff8d00412cf268d2b377339a6e912550595c552b73b2afbe: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:52:58.727365 containerd[1675]: time="2025-11-04T04:52:58.727349476Z" level=info msg="CreateContainer within sandbox \"353455624ffa271a89a4a830f330ed267f8aa5b3cd4d74516877e8e7a7bf8cc6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"83b017122936d543ff8d00412cf268d2b377339a6e912550595c552b73b2afbe\"" Nov 4 04:52:58.727746 containerd[1675]: time="2025-11-04T04:52:58.727731066Z" level=info msg="StartContainer for \"83b017122936d543ff8d00412cf268d2b377339a6e912550595c552b73b2afbe\"" Nov 4 04:52:58.728632 containerd[1675]: time="2025-11-04T04:52:58.728615552Z" level=info msg="connecting to shim 83b017122936d543ff8d00412cf268d2b377339a6e912550595c552b73b2afbe" address="unix:///run/containerd/s/06934d940f252d12f75959e4a1e5cca03239998601d4011cec17c891c4812c0c" protocol=ttrpc version=3 Nov 4 04:52:58.752376 systemd[1]: Started cri-containerd-83b017122936d543ff8d00412cf268d2b377339a6e912550595c552b73b2afbe.scope - libcontainer container 83b017122936d543ff8d00412cf268d2b377339a6e912550595c552b73b2afbe. Nov 4 04:52:58.790313 containerd[1675]: time="2025-11-04T04:52:58.790287130Z" level=info msg="StartContainer for \"83b017122936d543ff8d00412cf268d2b377339a6e912550595c552b73b2afbe\" returns successfully" Nov 4 04:53:00.144353 kubelet[2985]: E1104 04:53:00.144094 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8l85" podUID="fcffefa6-5e43-470d-ba72-4fb9af0e6455" Nov 4 04:53:00.152282 systemd[1]: cri-containerd-83b017122936d543ff8d00412cf268d2b377339a6e912550595c552b73b2afbe.scope: Deactivated successfully. Nov 4 04:53:00.152769 systemd[1]: cri-containerd-83b017122936d543ff8d00412cf268d2b377339a6e912550595c552b73b2afbe.scope: Consumed 280ms CPU time, 161.6M memory peak, 2.3M read from disk, 171.3M written to disk. 
Nov 4 04:53:00.161444 containerd[1675]: time="2025-11-04T04:53:00.161416625Z" level=info msg="received exit event container_id:\"83b017122936d543ff8d00412cf268d2b377339a6e912550595c552b73b2afbe\" id:\"83b017122936d543ff8d00412cf268d2b377339a6e912550595c552b73b2afbe\" pid:3746 exited_at:{seconds:1762231980 nanos:151640052}" Nov 4 04:53:00.199051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83b017122936d543ff8d00412cf268d2b377339a6e912550595c552b73b2afbe-rootfs.mount: Deactivated successfully. Nov 4 04:53:00.237612 kubelet[2985]: I1104 04:53:00.237594 2985 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 4 04:53:00.317761 systemd[1]: Created slice kubepods-burstable-podefcdbb78_bd9e_49c7_af7a_d1cd50d1c472.slice - libcontainer container kubepods-burstable-podefcdbb78_bd9e_49c7_af7a_d1cd50d1c472.slice. Nov 4 04:53:00.331952 systemd[1]: Created slice kubepods-besteffort-pod1d8b1bcb_e02e_4b4d_a0fc_69488ed9f88e.slice - libcontainer container kubepods-besteffort-pod1d8b1bcb_e02e_4b4d_a0fc_69488ed9f88e.slice. Nov 4 04:53:00.344897 systemd[1]: Created slice kubepods-besteffort-pod7732b0eb_674a_43e3_92d8_1fb67b4f9d41.slice - libcontainer container kubepods-besteffort-pod7732b0eb_674a_43e3_92d8_1fb67b4f9d41.slice. Nov 4 04:53:00.351724 systemd[1]: Created slice kubepods-burstable-pod7550d563_69e4_4b12_a9be_4644b313f876.slice - libcontainer container kubepods-burstable-pod7550d563_69e4_4b12_a9be_4644b313f876.slice. Nov 4 04:53:00.357839 systemd[1]: Created slice kubepods-besteffort-podc17ebeee_c521_467f_b0f7_4c3c787f171e.slice - libcontainer container kubepods-besteffort-podc17ebeee_c521_467f_b0f7_4c3c787f171e.slice. Nov 4 04:53:00.366162 systemd[1]: Created slice kubepods-besteffort-podd45b2a71_f81e_4bd0_9908_32bdf3ccd976.slice - libcontainer container kubepods-besteffort-podd45b2a71_f81e_4bd0_9908_32bdf3ccd976.slice. Nov 4 04:53:00.371348 systemd[1]: Created slice kubepods-besteffort-podf119f006_6b2e_4995_831f_8cf672906209.slice - libcontainer container kubepods-besteffort-podf119f006_6b2e_4995_831f_8cf672906209.slice. 
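[Editor's note] The burst of "Created slice" entries above follows a naming pattern that can be read straight off the log: each pod gets a kubepods-<qos>-pod<uid>.slice cgroup, with the dashes of the pod UID replaced by underscores. A small sketch of that mapping (illustrative, not kubelet source), using the coredns pod UID that appears in the volume entries just below:

// Reproduces the slice names seen in the "Created slice" log entries above.
// Illustrative helper only.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// UID taken from the coredns-66bc5c9577-pj2c4 volume entries later in the log.
	fmt.Println(podSliceName("burstable", "efcdbb78-bd9e-49c7-af7a-d1cd50d1c472"))
	// -> kubepods-burstable-podefcdbb78_bd9e_49c7_af7a_d1cd50d1c472.slice
}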
Nov 4 04:53:00.385513 containerd[1675]: time="2025-11-04T04:53:00.385069583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 4 04:53:00.456526 kubelet[2985]: I1104 04:53:00.456439 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c17ebeee-c521-467f-b0f7-4c3c787f171e-goldmane-key-pair\") pod \"goldmane-7c778bb748-c5fgp\" (UID: \"c17ebeee-c521-467f-b0f7-4c3c787f171e\") " pod="calico-system/goldmane-7c778bb748-c5fgp" Nov 4 04:53:00.456526 kubelet[2985]: I1104 04:53:00.456469 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s5fj\" (UniqueName: \"kubernetes.io/projected/c17ebeee-c521-467f-b0f7-4c3c787f171e-kube-api-access-8s5fj\") pod \"goldmane-7c778bb748-c5fgp\" (UID: \"c17ebeee-c521-467f-b0f7-4c3c787f171e\") " pod="calico-system/goldmane-7c778bb748-c5fgp" Nov 4 04:53:00.456526 kubelet[2985]: I1104 04:53:00.456486 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8l5n\" (UniqueName: \"kubernetes.io/projected/7550d563-69e4-4b12-a9be-4644b313f876-kube-api-access-n8l5n\") pod \"coredns-66bc5c9577-xxm2v\" (UID: \"7550d563-69e4-4b12-a9be-4644b313f876\") " pod="kube-system/coredns-66bc5c9577-xxm2v" Nov 4 04:53:00.456526 kubelet[2985]: I1104 04:53:00.456499 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d45b2a71-f81e-4bd0-9908-32bdf3ccd976-calico-apiserver-certs\") pod \"calico-apiserver-6b698f4965-59j2r\" (UID: \"d45b2a71-f81e-4bd0-9908-32bdf3ccd976\") " pod="calico-apiserver/calico-apiserver-6b698f4965-59j2r" Nov 4 04:53:00.456715 kubelet[2985]: I1104 04:53:00.456630 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7732b0eb-674a-43e3-92d8-1fb67b4f9d41-calico-apiserver-certs\") pod \"calico-apiserver-6b698f4965-vbq5r\" (UID: \"7732b0eb-674a-43e3-92d8-1fb67b4f9d41\") " pod="calico-apiserver/calico-apiserver-6b698f4965-vbq5r" Nov 4 04:53:00.456742 kubelet[2985]: I1104 04:53:00.456713 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgf7x\" (UniqueName: \"kubernetes.io/projected/efcdbb78-bd9e-49c7-af7a-d1cd50d1c472-kube-api-access-zgf7x\") pod \"coredns-66bc5c9577-pj2c4\" (UID: \"efcdbb78-bd9e-49c7-af7a-d1cd50d1c472\") " pod="kube-system/coredns-66bc5c9577-pj2c4" Nov 4 04:53:00.456742 kubelet[2985]: I1104 04:53:00.456730 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k89t\" (UniqueName: \"kubernetes.io/projected/7732b0eb-674a-43e3-92d8-1fb67b4f9d41-kube-api-access-8k89t\") pod \"calico-apiserver-6b698f4965-vbq5r\" (UID: \"7732b0eb-674a-43e3-92d8-1fb67b4f9d41\") " pod="calico-apiserver/calico-apiserver-6b698f4965-vbq5r" Nov 4 04:53:00.456792 kubelet[2985]: I1104 04:53:00.456745 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e-whisker-backend-key-pair\") pod \"whisker-86b98745dc-rsjl2\" (UID: \"1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e\") " pod="calico-system/whisker-86b98745dc-rsjl2" Nov 4 04:53:00.456938 kubelet[2985]: I1104 
04:53:00.456913 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e-whisker-ca-bundle\") pod \"whisker-86b98745dc-rsjl2\" (UID: \"1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e\") " pod="calico-system/whisker-86b98745dc-rsjl2" Nov 4 04:53:00.457007 kubelet[2985]: I1104 04:53:00.456964 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c17ebeee-c521-467f-b0f7-4c3c787f171e-config\") pod \"goldmane-7c778bb748-c5fgp\" (UID: \"c17ebeee-c521-467f-b0f7-4c3c787f171e\") " pod="calico-system/goldmane-7c778bb748-c5fgp" Nov 4 04:53:00.457007 kubelet[2985]: I1104 04:53:00.456980 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7550d563-69e4-4b12-a9be-4644b313f876-config-volume\") pod \"coredns-66bc5c9577-xxm2v\" (UID: \"7550d563-69e4-4b12-a9be-4644b313f876\") " pod="kube-system/coredns-66bc5c9577-xxm2v" Nov 4 04:53:00.457007 kubelet[2985]: I1104 04:53:00.456993 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts4p4\" (UniqueName: \"kubernetes.io/projected/1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e-kube-api-access-ts4p4\") pod \"whisker-86b98745dc-rsjl2\" (UID: \"1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e\") " pod="calico-system/whisker-86b98745dc-rsjl2" Nov 4 04:53:00.457007 kubelet[2985]: I1104 04:53:00.457004 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c17ebeee-c521-467f-b0f7-4c3c787f171e-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-c5fgp\" (UID: \"c17ebeee-c521-467f-b0f7-4c3c787f171e\") " pod="calico-system/goldmane-7c778bb748-c5fgp" Nov 4 04:53:00.457443 kubelet[2985]: I1104 04:53:00.457425 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrtp7\" (UniqueName: \"kubernetes.io/projected/d45b2a71-f81e-4bd0-9908-32bdf3ccd976-kube-api-access-zrtp7\") pod \"calico-apiserver-6b698f4965-59j2r\" (UID: \"d45b2a71-f81e-4bd0-9908-32bdf3ccd976\") " pod="calico-apiserver/calico-apiserver-6b698f4965-59j2r" Nov 4 04:53:00.457479 kubelet[2985]: I1104 04:53:00.457450 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zs9d\" (UniqueName: \"kubernetes.io/projected/f119f006-6b2e-4995-831f-8cf672906209-kube-api-access-7zs9d\") pod \"calico-kube-controllers-747674d8bc-lrrzc\" (UID: \"f119f006-6b2e-4995-831f-8cf672906209\") " pod="calico-system/calico-kube-controllers-747674d8bc-lrrzc" Nov 4 04:53:00.457516 kubelet[2985]: I1104 04:53:00.457494 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/efcdbb78-bd9e-49c7-af7a-d1cd50d1c472-config-volume\") pod \"coredns-66bc5c9577-pj2c4\" (UID: \"efcdbb78-bd9e-49c7-af7a-d1cd50d1c472\") " pod="kube-system/coredns-66bc5c9577-pj2c4" Nov 4 04:53:00.457516 kubelet[2985]: I1104 04:53:00.457508 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f119f006-6b2e-4995-831f-8cf672906209-tigera-ca-bundle\") pod 
\"calico-kube-controllers-747674d8bc-lrrzc\" (UID: \"f119f006-6b2e-4995-831f-8cf672906209\") " pod="calico-system/calico-kube-controllers-747674d8bc-lrrzc" Nov 4 04:53:00.628587 containerd[1675]: time="2025-11-04T04:53:00.628557353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pj2c4,Uid:efcdbb78-bd9e-49c7-af7a-d1cd50d1c472,Namespace:kube-system,Attempt:0,}" Nov 4 04:53:00.641918 containerd[1675]: time="2025-11-04T04:53:00.641784485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86b98745dc-rsjl2,Uid:1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e,Namespace:calico-system,Attempt:0,}" Nov 4 04:53:00.658594 containerd[1675]: time="2025-11-04T04:53:00.658440581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xxm2v,Uid:7550d563-69e4-4b12-a9be-4644b313f876,Namespace:kube-system,Attempt:0,}" Nov 4 04:53:00.665935 containerd[1675]: time="2025-11-04T04:53:00.665469969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-c5fgp,Uid:c17ebeee-c521-467f-b0f7-4c3c787f171e,Namespace:calico-system,Attempt:0,}" Nov 4 04:53:00.667858 containerd[1675]: time="2025-11-04T04:53:00.667832235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b698f4965-vbq5r,Uid:7732b0eb-674a-43e3-92d8-1fb67b4f9d41,Namespace:calico-apiserver,Attempt:0,}" Nov 4 04:53:00.673235 containerd[1675]: time="2025-11-04T04:53:00.673220063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b698f4965-59j2r,Uid:d45b2a71-f81e-4bd0-9908-32bdf3ccd976,Namespace:calico-apiserver,Attempt:0,}" Nov 4 04:53:00.675332 containerd[1675]: time="2025-11-04T04:53:00.675278156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-747674d8bc-lrrzc,Uid:f119f006-6b2e-4995-831f-8cf672906209,Namespace:calico-system,Attempt:0,}" Nov 4 04:53:00.917916 containerd[1675]: time="2025-11-04T04:53:00.917885886Z" level=error msg="Failed to destroy network for sandbox \"160998984a86830c77db9017f4300a0ae4199df0d9f37177d1a3fe920009d913\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:00.918547 containerd[1675]: time="2025-11-04T04:53:00.918530290Z" level=error msg="Failed to destroy network for sandbox \"e0f86e372ec2ca2c637650688fc41f1e3bf942e449772017362147cddb7b6f35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:00.920431 containerd[1675]: time="2025-11-04T04:53:00.920415723Z" level=error msg="Failed to destroy network for sandbox \"1934aa967ad32cd285a336e743a8979b0193588eab9f6953d9d1bf3a65a4d79f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:00.925458 containerd[1675]: time="2025-11-04T04:53:00.925404058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xxm2v,Uid:7550d563-69e4-4b12-a9be-4644b313f876,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"160998984a86830c77db9017f4300a0ae4199df0d9f37177d1a3fe920009d913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:00.929850 containerd[1675]: time="2025-11-04T04:53:00.929521070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86b98745dc-rsjl2,Uid:1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1934aa967ad32cd285a336e743a8979b0193588eab9f6953d9d1bf3a65a4d79f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:00.934860 kubelet[2985]: E1104 04:53:00.934826 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"160998984a86830c77db9017f4300a0ae4199df0d9f37177d1a3fe920009d913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:00.935702 kubelet[2985]: E1104 04:53:00.934928 2985 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"160998984a86830c77db9017f4300a0ae4199df0d9f37177d1a3fe920009d913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xxm2v" Nov 4 04:53:00.935702 kubelet[2985]: E1104 04:53:00.934942 2985 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"160998984a86830c77db9017f4300a0ae4199df0d9f37177d1a3fe920009d913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xxm2v" Nov 4 04:53:00.937345 containerd[1675]: time="2025-11-04T04:53:00.937173673Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pj2c4,Uid:efcdbb78-bd9e-49c7-af7a-d1cd50d1c472,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0f86e372ec2ca2c637650688fc41f1e3bf942e449772017362147cddb7b6f35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:00.937501 containerd[1675]: time="2025-11-04T04:53:00.937430077Z" level=error msg="Failed to destroy network for sandbox \"90dc5542463542d2e15e36c318e422a72fc1ed2aef3fcbbe9a3324aa1d34e14c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:00.937617 kubelet[2985]: E1104 04:53:00.937587 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xxm2v_kube-system(7550d563-69e4-4b12-a9be-4644b313f876)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xxm2v_kube-system(7550d563-69e4-4b12-a9be-4644b313f876)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"160998984a86830c77db9017f4300a0ae4199df0d9f37177d1a3fe920009d913\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xxm2v" podUID="7550d563-69e4-4b12-a9be-4644b313f876" Nov 4 04:53:00.938643 containerd[1675]: time="2025-11-04T04:53:00.938538022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b698f4965-59j2r,Uid:d45b2a71-f81e-4bd0-9908-32bdf3ccd976,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90dc5542463542d2e15e36c318e422a72fc1ed2aef3fcbbe9a3324aa1d34e14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:00.940169 containerd[1675]: time="2025-11-04T04:53:00.939444061Z" level=error msg="Failed to destroy network for sandbox \"eeb4aa0656c6ba03d32f61c493b1b4d5940748d114c69e1076d22b839b15421f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:00.940641 containerd[1675]: time="2025-11-04T04:53:00.940609801Z" level=error msg="Failed to destroy network for sandbox \"d20623fe15beda176452db8067589f71de1c0309f934b9f846cade42258dce25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:00.941289 containerd[1675]: time="2025-11-04T04:53:00.941252971Z" level=error msg="Failed to destroy network for sandbox \"9f930ce97a7ac9fdd6a6093f1238be92e2183aa1ca0af95172f464128593b018\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:00.941862 containerd[1675]: time="2025-11-04T04:53:00.941816301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b698f4965-vbq5r,Uid:7732b0eb-674a-43e3-92d8-1fb67b4f9d41,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeb4aa0656c6ba03d32f61c493b1b4d5940748d114c69e1076d22b839b15421f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:00.942396 containerd[1675]: time="2025-11-04T04:53:00.942350126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-c5fgp,Uid:c17ebeee-c521-467f-b0f7-4c3c787f171e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d20623fe15beda176452db8067589f71de1c0309f934b9f846cade42258dce25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:00.942443 kubelet[2985]: E1104 04:53:00.942404 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeb4aa0656c6ba03d32f61c493b1b4d5940748d114c69e1076d22b839b15421f\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:00.942443 kubelet[2985]: E1104 04:53:00.942428 2985 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeb4aa0656c6ba03d32f61c493b1b4d5940748d114c69e1076d22b839b15421f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b698f4965-vbq5r" Nov 4 04:53:00.942443 kubelet[2985]: E1104 04:53:00.942438 2985 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeb4aa0656c6ba03d32f61c493b1b4d5940748d114c69e1076d22b839b15421f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b698f4965-vbq5r" Nov 4 04:53:00.942511 kubelet[2985]: E1104 04:53:00.942463 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b698f4965-vbq5r_calico-apiserver(7732b0eb-674a-43e3-92d8-1fb67b4f9d41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b698f4965-vbq5r_calico-apiserver(7732b0eb-674a-43e3-92d8-1fb67b4f9d41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eeb4aa0656c6ba03d32f61c493b1b4d5940748d114c69e1076d22b839b15421f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b698f4965-vbq5r" podUID="7732b0eb-674a-43e3-92d8-1fb67b4f9d41" Nov 4 04:53:00.942511 kubelet[2985]: E1104 04:53:00.942488 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1934aa967ad32cd285a336e743a8979b0193588eab9f6953d9d1bf3a65a4d79f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:00.942511 kubelet[2985]: E1104 04:53:00.942497 2985 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1934aa967ad32cd285a336e743a8979b0193588eab9f6953d9d1bf3a65a4d79f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-86b98745dc-rsjl2" Nov 4 04:53:00.942581 kubelet[2985]: E1104 04:53:00.942506 2985 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1934aa967ad32cd285a336e743a8979b0193588eab9f6953d9d1bf3a65a4d79f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-86b98745dc-rsjl2" Nov 4 04:53:00.942581 kubelet[2985]: E1104 04:53:00.942520 2985 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-86b98745dc-rsjl2_calico-system(1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-86b98745dc-rsjl2_calico-system(1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1934aa967ad32cd285a336e743a8979b0193588eab9f6953d9d1bf3a65a4d79f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-86b98745dc-rsjl2" podUID="1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e" Nov 4 04:53:00.942581 kubelet[2985]: E1104 04:53:00.942536 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0f86e372ec2ca2c637650688fc41f1e3bf942e449772017362147cddb7b6f35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:00.943734 kubelet[2985]: E1104 04:53:00.942545 2985 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0f86e372ec2ca2c637650688fc41f1e3bf942e449772017362147cddb7b6f35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-pj2c4" Nov 4 04:53:00.943734 kubelet[2985]: E1104 04:53:00.942552 2985 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0f86e372ec2ca2c637650688fc41f1e3bf942e449772017362147cddb7b6f35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-pj2c4" Nov 4 04:53:00.943734 kubelet[2985]: E1104 04:53:00.942573 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-pj2c4_kube-system(efcdbb78-bd9e-49c7-af7a-d1cd50d1c472)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-pj2c4_kube-system(efcdbb78-bd9e-49c7-af7a-d1cd50d1c472)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0f86e372ec2ca2c637650688fc41f1e3bf942e449772017362147cddb7b6f35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-pj2c4" podUID="efcdbb78-bd9e-49c7-af7a-d1cd50d1c472" Nov 4 04:53:00.943938 kubelet[2985]: E1104 04:53:00.942588 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90dc5542463542d2e15e36c318e422a72fc1ed2aef3fcbbe9a3324aa1d34e14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:00.943938 kubelet[2985]: E1104 04:53:00.942597 2985 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"90dc5542463542d2e15e36c318e422a72fc1ed2aef3fcbbe9a3324aa1d34e14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b698f4965-59j2r" Nov 4 04:53:00.943938 kubelet[2985]: E1104 04:53:00.942605 2985 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90dc5542463542d2e15e36c318e422a72fc1ed2aef3fcbbe9a3324aa1d34e14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b698f4965-59j2r" Nov 4 04:53:00.943999 kubelet[2985]: E1104 04:53:00.942619 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b698f4965-59j2r_calico-apiserver(d45b2a71-f81e-4bd0-9908-32bdf3ccd976)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b698f4965-59j2r_calico-apiserver(d45b2a71-f81e-4bd0-9908-32bdf3ccd976)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90dc5542463542d2e15e36c318e422a72fc1ed2aef3fcbbe9a3324aa1d34e14c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b698f4965-59j2r" podUID="d45b2a71-f81e-4bd0-9908-32bdf3ccd976" Nov 4 04:53:00.943999 kubelet[2985]: E1104 04:53:00.943309 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d20623fe15beda176452db8067589f71de1c0309f934b9f846cade42258dce25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:00.943999 kubelet[2985]: E1104 04:53:00.943324 2985 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d20623fe15beda176452db8067589f71de1c0309f934b9f846cade42258dce25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-c5fgp" Nov 4 04:53:00.944070 kubelet[2985]: E1104 04:53:00.943339 2985 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d20623fe15beda176452db8067589f71de1c0309f934b9f846cade42258dce25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-c5fgp" Nov 4 04:53:00.944070 kubelet[2985]: E1104 04:53:00.943359 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-c5fgp_calico-system(c17ebeee-c521-467f-b0f7-4c3c787f171e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-c5fgp_calico-system(c17ebeee-c521-467f-b0f7-4c3c787f171e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d20623fe15beda176452db8067589f71de1c0309f934b9f846cade42258dce25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-c5fgp" podUID="c17ebeee-c521-467f-b0f7-4c3c787f171e" Nov 4 04:53:00.944545 containerd[1675]: time="2025-11-04T04:53:00.944491094Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-747674d8bc-lrrzc,Uid:f119f006-6b2e-4995-831f-8cf672906209,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f930ce97a7ac9fdd6a6093f1238be92e2183aa1ca0af95172f464128593b018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:00.944612 kubelet[2985]: E1104 04:53:00.944577 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f930ce97a7ac9fdd6a6093f1238be92e2183aa1ca0af95172f464128593b018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:00.944612 kubelet[2985]: E1104 04:53:00.944597 2985 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f930ce97a7ac9fdd6a6093f1238be92e2183aa1ca0af95172f464128593b018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-747674d8bc-lrrzc" Nov 4 04:53:00.944658 kubelet[2985]: E1104 04:53:00.944609 2985 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f930ce97a7ac9fdd6a6093f1238be92e2183aa1ca0af95172f464128593b018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-747674d8bc-lrrzc" Nov 4 04:53:00.944658 kubelet[2985]: E1104 04:53:00.944633 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-747674d8bc-lrrzc_calico-system(f119f006-6b2e-4995-831f-8cf672906209)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-747674d8bc-lrrzc_calico-system(f119f006-6b2e-4995-831f-8cf672906209)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f930ce97a7ac9fdd6a6093f1238be92e2183aa1ca0af95172f464128593b018\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-747674d8bc-lrrzc" podUID="f119f006-6b2e-4995-831f-8cf672906209" Nov 4 04:53:02.159358 systemd[1]: Created slice kubepods-besteffort-podfcffefa6_5e43_470d_ba72_4fb9af0e6455.slice - libcontainer container kubepods-besteffort-podfcffefa6_5e43_470d_ba72_4fb9af0e6455.slice. 
Nov 4 04:53:02.202043 kubelet[2985]: I1104 04:53:02.201681 2985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 04:53:02.461115 containerd[1675]: time="2025-11-04T04:53:02.461045058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8l85,Uid:fcffefa6-5e43-470d-ba72-4fb9af0e6455,Namespace:calico-system,Attempt:0,}" Nov 4 04:53:02.587147 containerd[1675]: time="2025-11-04T04:53:02.587109172Z" level=error msg="Failed to destroy network for sandbox \"299021f96655976832484a43fce183a0a41f3a29e22c98b62663d54f4eb424dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:02.588662 systemd[1]: run-netns-cni\x2dade7516c\x2d1ed3\x2de44e\x2d7c63\x2da5bae48bdecc.mount: Deactivated successfully. Nov 4 04:53:02.638674 containerd[1675]: time="2025-11-04T04:53:02.638589633Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8l85,Uid:fcffefa6-5e43-470d-ba72-4fb9af0e6455,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"299021f96655976832484a43fce183a0a41f3a29e22c98b62663d54f4eb424dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:02.638832 kubelet[2985]: E1104 04:53:02.638799 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"299021f96655976832484a43fce183a0a41f3a29e22c98b62663d54f4eb424dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:53:02.638890 kubelet[2985]: E1104 04:53:02.638835 2985 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"299021f96655976832484a43fce183a0a41f3a29e22c98b62663d54f4eb424dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n8l85" Nov 4 04:53:02.638890 kubelet[2985]: E1104 04:53:02.638853 2985 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"299021f96655976832484a43fce183a0a41f3a29e22c98b62663d54f4eb424dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n8l85" Nov 4 04:53:02.638950 kubelet[2985]: E1104 04:53:02.638892 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n8l85_calico-system(fcffefa6-5e43-470d-ba72-4fb9af0e6455)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n8l85_calico-system(fcffefa6-5e43-470d-ba72-4fb9af0e6455)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"299021f96655976832484a43fce183a0a41f3a29e22c98b62663d54f4eb424dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n8l85" podUID="fcffefa6-5e43-470d-ba72-4fb9af0e6455" Nov 4 04:53:04.799832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount523227608.mount: Deactivated successfully. Nov 4 04:53:04.834159 containerd[1675]: time="2025-11-04T04:53:04.833801461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:53:04.839199 containerd[1675]: time="2025-11-04T04:53:04.838456644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Nov 4 04:53:04.855515 containerd[1675]: time="2025-11-04T04:53:04.855482794Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:53:04.857023 containerd[1675]: time="2025-11-04T04:53:04.856746734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:53:04.857470 containerd[1675]: time="2025-11-04T04:53:04.857447909Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.471810932s" Nov 4 04:53:04.862148 containerd[1675]: time="2025-11-04T04:53:04.862105197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 4 04:53:04.912726 containerd[1675]: time="2025-11-04T04:53:04.912695851Z" level=info msg="CreateContainer within sandbox \"353455624ffa271a89a4a830f330ed267f8aa5b3cd4d74516877e8e7a7bf8cc6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 4 04:53:05.044815 containerd[1675]: time="2025-11-04T04:53:05.043986420Z" level=info msg="Container 440de899614a6ee6cbf4d82917dce2c3110a9f6a88062d4c7b314ca45924e85e: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:53:05.044320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1588342663.mount: Deactivated successfully. Nov 4 04:53:05.303055 containerd[1675]: time="2025-11-04T04:53:05.302380911Z" level=info msg="CreateContainer within sandbox \"353455624ffa271a89a4a830f330ed267f8aa5b3cd4d74516877e8e7a7bf8cc6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"440de899614a6ee6cbf4d82917dce2c3110a9f6a88062d4c7b314ca45924e85e\"" Nov 4 04:53:05.324889 containerd[1675]: time="2025-11-04T04:53:05.318467325Z" level=info msg="StartContainer for \"440de899614a6ee6cbf4d82917dce2c3110a9f6a88062d4c7b314ca45924e85e\"" Nov 4 04:53:05.326360 containerd[1675]: time="2025-11-04T04:53:05.326320057Z" level=info msg="connecting to shim 440de899614a6ee6cbf4d82917dce2c3110a9f6a88062d4c7b314ca45924e85e" address="unix:///run/containerd/s/06934d940f252d12f75959e4a1e5cca03239998601d4011cec17c891c4812c0c" protocol=ttrpc version=3 Nov 4 04:53:05.451263 systemd[1]: Started cri-containerd-440de899614a6ee6cbf4d82917dce2c3110a9f6a88062d4c7b314ca45924e85e.scope - libcontainer container 440de899614a6ee6cbf4d82917dce2c3110a9f6a88062d4c7b314ca45924e85e. 
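[Editor's note] For scale, the "Pulled image" entry above reports both the resolved repo size and the wall-clock pull time for ghcr.io/flatcar/calico/node:v3.30.4; dividing one by the other gives an effective pull rate of roughly 33 MiB/s. A tiny sketch of that arithmetic (not containerd code; both constants are copied from the log):

// Back-of-the-envelope calculation from the "Pulled image ... calico/node:v3.30.4"
// entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const sizeBytes = 156883537                  // repo size reported by containerd
	d, err := time.ParseDuration("4.471810932s") // pull duration reported by containerd
	if err != nil {
		panic(err)
	}
	rate := float64(sizeBytes) / d.Seconds()
	fmt.Printf("effective pull rate: %.1f MiB/s\n", rate/(1<<20)) // ≈ 33.5 MiB/s
}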
Nov 4 04:53:05.488058 containerd[1675]: time="2025-11-04T04:53:05.488030191Z" level=info msg="StartContainer for \"440de899614a6ee6cbf4d82917dce2c3110a9f6a88062d4c7b314ca45924e85e\" returns successfully" Nov 4 04:53:05.622697 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 4 04:53:05.628279 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 4 04:53:06.425683 kubelet[2985]: I1104 04:53:06.424050 2985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gf622" podStartSLOduration=2.3095531400000002 podStartE2EDuration="18.424037166s" podCreationTimestamp="2025-11-04 04:52:48 +0000 UTC" firstStartedPulling="2025-11-04 04:52:48.748182485 +0000 UTC m=+18.690440959" lastFinishedPulling="2025-11-04 04:53:04.862666505 +0000 UTC m=+34.804924985" observedRunningTime="2025-11-04 04:53:06.421697398 +0000 UTC m=+36.363955882" watchObservedRunningTime="2025-11-04 04:53:06.424037166 +0000 UTC m=+36.366295645" Nov 4 04:53:06.496067 kubelet[2985]: I1104 04:53:06.495796 2985 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e-whisker-backend-key-pair\") pod \"1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e\" (UID: \"1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e\") " Nov 4 04:53:06.497380 kubelet[2985]: I1104 04:53:06.497185 2985 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e-whisker-ca-bundle\") pod \"1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e\" (UID: \"1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e\") " Nov 4 04:53:06.497380 kubelet[2985]: I1104 04:53:06.497208 2985 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts4p4\" (UniqueName: \"kubernetes.io/projected/1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e-kube-api-access-ts4p4\") pod \"1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e\" (UID: \"1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e\") " Nov 4 04:53:06.508207 kubelet[2985]: I1104 04:53:06.508180 2985 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e" (UID: "1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 04:53:06.508774 kubelet[2985]: I1104 04:53:06.508397 2985 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 4 04:53:06.518769 systemd[1]: var-lib-kubelet-pods-1d8b1bcb\x2de02e\x2d4b4d\x2da0fc\x2d69488ed9f88e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dts4p4.mount: Deactivated successfully. Nov 4 04:53:06.522927 systemd[1]: var-lib-kubelet-pods-1d8b1bcb\x2de02e\x2d4b4d\x2da0fc\x2d69488ed9f88e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
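The pod_startup_latency_tracker entry above for calico-node-gf622 quotes two durations, and the timestamps in the same line show how they relate: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). This is a re-derivation from the logged values, truncated to microseconds, not a statement about kubelet internals:

```python
# Re-deriving the startup figures logged above for calico-node-gf622.
from datetime import datetime, timezone

UTC = timezone.utc
created   = datetime(2025, 11, 4, 4, 52, 48, 0, tzinfo=UTC)       # podCreationTimestamp
pull_from = datetime(2025, 11, 4, 4, 52, 48, 748182, tzinfo=UTC)  # firstStartedPulling (ns truncated)
pull_to   = datetime(2025, 11, 4, 4, 53, 4, 862666, tzinfo=UTC)   # lastFinishedPulling (ns truncated)
running   = datetime(2025, 11, 4, 4, 53, 6, 424037, tzinfo=UTC)   # observedRunningTime (ns truncated)

e2e = (running - created).total_seconds()           # ~18.424037 s -> podStartE2EDuration
slo = e2e - (pull_to - pull_from).total_seconds()   # ~2.309553 s  -> podStartSLOduration
print(f"E2E ~= {e2e:.6f}s, SLO ~= {slo:.6f}s")
```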
Nov 4 04:53:06.536671 kubelet[2985]: I1104 04:53:06.536224 2985 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e-kube-api-access-ts4p4" (OuterVolumeSpecName: "kube-api-access-ts4p4") pod "1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e" (UID: "1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e"). InnerVolumeSpecName "kube-api-access-ts4p4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 04:53:06.537589 kubelet[2985]: I1104 04:53:06.536900 2985 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e" (UID: "1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 04:53:06.609299 kubelet[2985]: I1104 04:53:06.609274 2985 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 4 04:53:06.609299 kubelet[2985]: I1104 04:53:06.609295 2985 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ts4p4\" (UniqueName: \"kubernetes.io/projected/1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e-kube-api-access-ts4p4\") on node \"localhost\" DevicePath \"\"" Nov 4 04:53:06.637667 systemd-networkd[1561]: vxlan.calico: Link UP Nov 4 04:53:06.637672 systemd-networkd[1561]: vxlan.calico: Gained carrier Nov 4 04:53:06.722071 systemd[1]: Removed slice kubepods-besteffort-pod1d8b1bcb_e02e_4b4d_a0fc_69488ed9f88e.slice - libcontainer container kubepods-besteffort-pod1d8b1bcb_e02e_4b4d_a0fc_69488ed9f88e.slice. Nov 4 04:53:06.834899 systemd[1]: Created slice kubepods-besteffort-podf9df857a_4dd9_492a_a798_bdfc5556871c.slice - libcontainer container kubepods-besteffort-podf9df857a_4dd9_492a_a798_bdfc5556871c.slice. 
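The mount units deactivated above use systemd's unit-name escaping: unescaped "-" characters stand for path separators, while literal "-" and "~" in the underlying path are encoded as \x2d and \x7e (the pod slice names, by contrast, simply replace "-" in the pod UID with "_"). A small decoder sketch, roughly equivalent to `systemd-escape --unescape --path`:

```python
# Decoding the systemd-escaped mount unit names seen in the log above.
import re

def unescape_unit(name: str, suffix: str = ".mount") -> str:
    body = name.removesuffix(suffix)
    body = body.replace("-", "/")            # remaining "-" are path separators
    body = re.sub(r"\\x([0-9a-fA-F]{2})",    # "\x2d" -> "-", "\x7e" -> "~", ...
                  lambda m: chr(int(m.group(1), 16)), body)
    return "/" + body.lstrip("/")

print(unescape_unit(r"var-lib-kubelet-pods-1d8b1bcb\x2de02e\x2d4b4d\x2da0fc\x2d69488ed9f88e-"
                    r"volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dts4p4.mount"))
# -> /var/lib/kubelet/pods/1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e/volumes/kubernetes.io~projected/kube-api-access-ts4p4
```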
Nov 4 04:53:06.910644 kubelet[2985]: I1104 04:53:06.910564 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9df857a-4dd9-492a-a798-bdfc5556871c-whisker-ca-bundle\") pod \"whisker-7c7fd785ff-28wps\" (UID: \"f9df857a-4dd9-492a-a798-bdfc5556871c\") " pod="calico-system/whisker-7c7fd785ff-28wps" Nov 4 04:53:06.910644 kubelet[2985]: I1104 04:53:06.910589 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f9df857a-4dd9-492a-a798-bdfc5556871c-whisker-backend-key-pair\") pod \"whisker-7c7fd785ff-28wps\" (UID: \"f9df857a-4dd9-492a-a798-bdfc5556871c\") " pod="calico-system/whisker-7c7fd785ff-28wps" Nov 4 04:53:06.910644 kubelet[2985]: I1104 04:53:06.910602 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cb99\" (UniqueName: \"kubernetes.io/projected/f9df857a-4dd9-492a-a798-bdfc5556871c-kube-api-access-6cb99\") pod \"whisker-7c7fd785ff-28wps\" (UID: \"f9df857a-4dd9-492a-a798-bdfc5556871c\") " pod="calico-system/whisker-7c7fd785ff-28wps" Nov 4 04:53:07.138405 containerd[1675]: time="2025-11-04T04:53:07.138378903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c7fd785ff-28wps,Uid:f9df857a-4dd9-492a-a798-bdfc5556871c,Namespace:calico-system,Attempt:0,}" Nov 4 04:53:07.418046 kubelet[2985]: I1104 04:53:07.417971 2985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 04:53:07.523098 systemd-networkd[1561]: cali79c31cf13f9: Link UP Nov 4 04:53:07.523719 systemd-networkd[1561]: cali79c31cf13f9: Gained carrier Nov 4 04:53:07.534155 containerd[1675]: 2025-11-04 04:53:07.192 [INFO][4297] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7c7fd785ff--28wps-eth0 whisker-7c7fd785ff- calico-system f9df857a-4dd9-492a-a798-bdfc5556871c 910 0 2025-11-04 04:53:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7c7fd785ff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7c7fd785ff-28wps eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali79c31cf13f9 [] [] }} ContainerID="6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92" Namespace="calico-system" Pod="whisker-7c7fd785ff-28wps" WorkloadEndpoint="localhost-k8s-whisker--7c7fd785ff--28wps-" Nov 4 04:53:07.534155 containerd[1675]: 2025-11-04 04:53:07.193 [INFO][4297] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92" Namespace="calico-system" Pod="whisker-7c7fd785ff-28wps" WorkloadEndpoint="localhost-k8s-whisker--7c7fd785ff--28wps-eth0" Nov 4 04:53:07.534155 containerd[1675]: 2025-11-04 04:53:07.415 [INFO][4311] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92" HandleID="k8s-pod-network.6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92" Workload="localhost-k8s-whisker--7c7fd785ff--28wps-eth0" Nov 4 04:53:07.535492 containerd[1675]: 2025-11-04 04:53:07.419 [INFO][4311] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92" 
HandleID="k8s-pod-network.6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92" Workload="localhost-k8s-whisker--7c7fd785ff--28wps-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e2a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7c7fd785ff-28wps", "timestamp":"2025-11-04 04:53:07.415764655 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:53:07.535492 containerd[1675]: 2025-11-04 04:53:07.419 [INFO][4311] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:53:07.535492 containerd[1675]: 2025-11-04 04:53:07.420 [INFO][4311] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 04:53:07.535492 containerd[1675]: 2025-11-04 04:53:07.420 [INFO][4311] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:53:07.535492 containerd[1675]: 2025-11-04 04:53:07.438 [INFO][4311] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92" host="localhost" Nov 4 04:53:07.535492 containerd[1675]: 2025-11-04 04:53:07.505 [INFO][4311] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:53:07.535492 containerd[1675]: 2025-11-04 04:53:07.507 [INFO][4311] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:53:07.535492 containerd[1675]: 2025-11-04 04:53:07.508 [INFO][4311] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:53:07.535492 containerd[1675]: 2025-11-04 04:53:07.509 [INFO][4311] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:53:07.535492 containerd[1675]: 2025-11-04 04:53:07.510 [INFO][4311] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92" host="localhost" Nov 4 04:53:07.535688 containerd[1675]: 2025-11-04 04:53:07.510 [INFO][4311] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92 Nov 4 04:53:07.535688 containerd[1675]: 2025-11-04 04:53:07.512 [INFO][4311] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92" host="localhost" Nov 4 04:53:07.535688 containerd[1675]: 2025-11-04 04:53:07.516 [INFO][4311] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92" host="localhost" Nov 4 04:53:07.535688 containerd[1675]: 2025-11-04 04:53:07.516 [INFO][4311] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92" host="localhost" Nov 4 04:53:07.535688 containerd[1675]: 2025-11-04 04:53:07.516 [INFO][4311] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
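The IPAM trace above claims 192.168.88.129 for the whisker pod from the host-affine block 192.168.88.128/26; the later sandboxes below receive .130 (coredns), .131 (goldmane) and .132 (calico-apiserver) from the same block. For reference, the shape of that block (a /26 spans 64 addresses, 62 of them assignable as hosts):

```python
# The host-affine Calico IPAM block referenced in the entries above.
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")
hosts = list(block.hosts())                      # .129 .. .190, i.e. 62 usable addresses
print(block.num_addresses, hosts[0], hosts[-1])  # 64 192.168.88.129 192.168.88.190
print([str(h) for h in hosts[:4]])               # the four pod IPs handed out in this log
```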
Nov 4 04:53:07.535688 containerd[1675]: 2025-11-04 04:53:07.516 [INFO][4311] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92" HandleID="k8s-pod-network.6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92" Workload="localhost-k8s-whisker--7c7fd785ff--28wps-eth0" Nov 4 04:53:07.535798 containerd[1675]: 2025-11-04 04:53:07.518 [INFO][4297] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92" Namespace="calico-system" Pod="whisker-7c7fd785ff-28wps" WorkloadEndpoint="localhost-k8s-whisker--7c7fd785ff--28wps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7c7fd785ff--28wps-eth0", GenerateName:"whisker-7c7fd785ff-", Namespace:"calico-system", SelfLink:"", UID:"f9df857a-4dd9-492a-a798-bdfc5556871c", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 53, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c7fd785ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7c7fd785ff-28wps", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali79c31cf13f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:53:07.535798 containerd[1675]: 2025-11-04 04:53:07.518 [INFO][4297] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92" Namespace="calico-system" Pod="whisker-7c7fd785ff-28wps" WorkloadEndpoint="localhost-k8s-whisker--7c7fd785ff--28wps-eth0" Nov 4 04:53:07.536457 containerd[1675]: 2025-11-04 04:53:07.518 [INFO][4297] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79c31cf13f9 ContainerID="6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92" Namespace="calico-system" Pod="whisker-7c7fd785ff-28wps" WorkloadEndpoint="localhost-k8s-whisker--7c7fd785ff--28wps-eth0" Nov 4 04:53:07.536457 containerd[1675]: 2025-11-04 04:53:07.525 [INFO][4297] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92" Namespace="calico-system" Pod="whisker-7c7fd785ff-28wps" WorkloadEndpoint="localhost-k8s-whisker--7c7fd785ff--28wps-eth0" Nov 4 04:53:07.536500 containerd[1675]: 2025-11-04 04:53:07.525 [INFO][4297] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92" Namespace="calico-system" Pod="whisker-7c7fd785ff-28wps" WorkloadEndpoint="localhost-k8s-whisker--7c7fd785ff--28wps-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7c7fd785ff--28wps-eth0", GenerateName:"whisker-7c7fd785ff-", Namespace:"calico-system", SelfLink:"", UID:"f9df857a-4dd9-492a-a798-bdfc5556871c", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 53, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c7fd785ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92", Pod:"whisker-7c7fd785ff-28wps", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali79c31cf13f9", MAC:"6e:37:25:e1:bb:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:53:07.536544 containerd[1675]: 2025-11-04 04:53:07.531 [INFO][4297] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92" Namespace="calico-system" Pod="whisker-7c7fd785ff-28wps" WorkloadEndpoint="localhost-k8s-whisker--7c7fd785ff--28wps-eth0" Nov 4 04:53:07.635070 containerd[1675]: time="2025-11-04T04:53:07.635016105Z" level=info msg="connecting to shim 6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92" address="unix:///run/containerd/s/70464f1e1aedf9780c0025919332fddc64f96ed9496d3f9fc607b11f19dcb989" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:53:07.655231 systemd[1]: Started cri-containerd-6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92.scope - libcontainer container 6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92. 
Nov 4 04:53:07.662671 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:53:07.696855 containerd[1675]: time="2025-11-04T04:53:07.696712013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c7fd785ff-28wps,Uid:f9df857a-4dd9-492a-a798-bdfc5556871c,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d32fe6b8c3522551704f5eb47348a9516c8f1ab39734da42d74aa2ea141be92\"" Nov 4 04:53:07.707010 containerd[1675]: time="2025-11-04T04:53:07.706982027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 04:53:08.048536 containerd[1675]: time="2025-11-04T04:53:08.048445014Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:08.055216 containerd[1675]: time="2025-11-04T04:53:08.055161394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:08.055216 containerd[1675]: time="2025-11-04T04:53:08.055164312Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 04:53:08.055789 kubelet[2985]: E1104 04:53:08.055334 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:53:08.055789 kubelet[2985]: E1104 04:53:08.055364 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:53:08.055789 kubelet[2985]: E1104 04:53:08.055425 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7c7fd785ff-28wps_calico-system(f9df857a-4dd9-492a-a798-bdfc5556871c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:08.064231 containerd[1675]: time="2025-11-04T04:53:08.056383899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 04:53:08.160542 kubelet[2985]: I1104 04:53:08.160352 2985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e" path="/var/lib/kubelet/pods/1d8b1bcb-e02e-4b4d-a0fc-69488ed9f88e/volumes" Nov 4 04:53:08.169263 systemd-networkd[1561]: vxlan.calico: Gained IPv6LL Nov 4 04:53:08.425914 containerd[1675]: time="2025-11-04T04:53:08.425776364Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:08.427387 containerd[1675]: time="2025-11-04T04:53:08.427370130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:08.428234 containerd[1675]: time="2025-11-04T04:53:08.428218472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 04:53:08.429325 kubelet[2985]: E1104 04:53:08.428396 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:53:08.429325 kubelet[2985]: E1104 04:53:08.428420 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:53:08.429325 kubelet[2985]: E1104 04:53:08.428471 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7c7fd785ff-28wps_calico-system(f9df857a-4dd9-492a-a798-bdfc5556871c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:08.429325 kubelet[2985]: E1104 04:53:08.428500 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c7fd785ff-28wps" podUID="f9df857a-4dd9-492a-a798-bdfc5556871c" Nov 4 04:53:09.193242 systemd-networkd[1561]: cali79c31cf13f9: Gained IPv6LL Nov 4 04:53:09.425152 kubelet[2985]: E1104 04:53:09.423225 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c7fd785ff-28wps" podUID="f9df857a-4dd9-492a-a798-bdfc5556871c" Nov 4 04:53:12.205450 kubelet[2985]: I1104 04:53:12.205253 2985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 04:53:13.168423 containerd[1675]: time="2025-11-04T04:53:13.168239370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pj2c4,Uid:efcdbb78-bd9e-49c7-af7a-d1cd50d1c472,Namespace:kube-system,Attempt:0,}" Nov 4 04:53:13.293904 
systemd-networkd[1561]: cali63427f0ef6a: Link UP Nov 4 04:53:13.294485 systemd-networkd[1561]: cali63427f0ef6a: Gained carrier Nov 4 04:53:13.318547 containerd[1675]: 2025-11-04 04:53:13.241 [INFO][4439] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--pj2c4-eth0 coredns-66bc5c9577- kube-system efcdbb78-bd9e-49c7-af7a-d1cd50d1c472 834 0 2025-11-04 04:52:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-pj2c4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali63427f0ef6a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd" Namespace="kube-system" Pod="coredns-66bc5c9577-pj2c4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pj2c4-" Nov 4 04:53:13.318547 containerd[1675]: 2025-11-04 04:53:13.241 [INFO][4439] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd" Namespace="kube-system" Pod="coredns-66bc5c9577-pj2c4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pj2c4-eth0" Nov 4 04:53:13.318547 containerd[1675]: 2025-11-04 04:53:13.259 [INFO][4450] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd" HandleID="k8s-pod-network.b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd" Workload="localhost-k8s-coredns--66bc5c9577--pj2c4-eth0" Nov 4 04:53:13.327082 containerd[1675]: 2025-11-04 04:53:13.260 [INFO][4450] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd" HandleID="k8s-pod-network.b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd" Workload="localhost-k8s-coredns--66bc5c9577--pj2c4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f060), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-pj2c4", "timestamp":"2025-11-04 04:53:13.259937206 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:53:13.327082 containerd[1675]: 2025-11-04 04:53:13.260 [INFO][4450] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:53:13.327082 containerd[1675]: 2025-11-04 04:53:13.260 [INFO][4450] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:53:13.327082 containerd[1675]: 2025-11-04 04:53:13.260 [INFO][4450] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:53:13.327082 containerd[1675]: 2025-11-04 04:53:13.265 [INFO][4450] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd" host="localhost" Nov 4 04:53:13.327082 containerd[1675]: 2025-11-04 04:53:13.269 [INFO][4450] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:53:13.327082 containerd[1675]: 2025-11-04 04:53:13.273 [INFO][4450] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:53:13.327082 containerd[1675]: 2025-11-04 04:53:13.275 [INFO][4450] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:53:13.327082 containerd[1675]: 2025-11-04 04:53:13.277 [INFO][4450] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:53:13.327082 containerd[1675]: 2025-11-04 04:53:13.277 [INFO][4450] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd" host="localhost" Nov 4 04:53:13.327300 containerd[1675]: 2025-11-04 04:53:13.278 [INFO][4450] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd Nov 4 04:53:13.327300 containerd[1675]: 2025-11-04 04:53:13.282 [INFO][4450] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd" host="localhost" Nov 4 04:53:13.327300 containerd[1675]: 2025-11-04 04:53:13.285 [INFO][4450] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd" host="localhost" Nov 4 04:53:13.327300 containerd[1675]: 2025-11-04 04:53:13.286 [INFO][4450] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd" host="localhost" Nov 4 04:53:13.327300 containerd[1675]: 2025-11-04 04:53:13.286 [INFO][4450] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 04:53:13.327300 containerd[1675]: 2025-11-04 04:53:13.286 [INFO][4450] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd" HandleID="k8s-pod-network.b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd" Workload="localhost-k8s-coredns--66bc5c9577--pj2c4-eth0" Nov 4 04:53:13.327396 containerd[1675]: 2025-11-04 04:53:13.289 [INFO][4439] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd" Namespace="kube-system" Pod="coredns-66bc5c9577-pj2c4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pj2c4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--pj2c4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"efcdbb78-bd9e-49c7-af7a-d1cd50d1c472", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 52, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-pj2c4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63427f0ef6a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:53:13.327396 containerd[1675]: 2025-11-04 04:53:13.289 [INFO][4439] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd" Namespace="kube-system" Pod="coredns-66bc5c9577-pj2c4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pj2c4-eth0" Nov 4 04:53:13.327396 containerd[1675]: 2025-11-04 04:53:13.289 [INFO][4439] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63427f0ef6a ContainerID="b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd" Namespace="kube-system" Pod="coredns-66bc5c9577-pj2c4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pj2c4-eth0" Nov 4 04:53:13.327396 containerd[1675]: 2025-11-04 04:53:13.295 
[INFO][4439] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd" Namespace="kube-system" Pod="coredns-66bc5c9577-pj2c4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pj2c4-eth0" Nov 4 04:53:13.327396 containerd[1675]: 2025-11-04 04:53:13.296 [INFO][4439] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd" Namespace="kube-system" Pod="coredns-66bc5c9577-pj2c4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pj2c4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--pj2c4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"efcdbb78-bd9e-49c7-af7a-d1cd50d1c472", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 52, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd", Pod:"coredns-66bc5c9577-pj2c4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63427f0ef6a", MAC:"4e:09:4c:7d:5b:19", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:53:13.327396 containerd[1675]: 2025-11-04 04:53:13.317 [INFO][4439] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd" Namespace="kube-system" Pod="coredns-66bc5c9577-pj2c4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pj2c4-eth0" Nov 4 04:53:13.389698 containerd[1675]: time="2025-11-04T04:53:13.389661699Z" level=info msg="connecting to shim b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd" address="unix:///run/containerd/s/973e28910af7380a3add4f234350574dc4c8cfd7962aa27e0e673edf8c571ac1" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:53:13.411320 systemd[1]: Started 
cri-containerd-b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd.scope - libcontainer container b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd. Nov 4 04:53:13.421857 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:53:13.457992 containerd[1675]: time="2025-11-04T04:53:13.457967505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pj2c4,Uid:efcdbb78-bd9e-49c7-af7a-d1cd50d1c472,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd\"" Nov 4 04:53:13.478945 containerd[1675]: time="2025-11-04T04:53:13.478822695Z" level=info msg="CreateContainer within sandbox \"b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 04:53:13.537674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2453748185.mount: Deactivated successfully. Nov 4 04:53:13.539664 containerd[1675]: time="2025-11-04T04:53:13.538551513Z" level=info msg="Container 1e1a44a9d1c0f71b053675a54462f6734e665423445241573768cba028361b78: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:53:13.542752 containerd[1675]: time="2025-11-04T04:53:13.542728176Z" level=info msg="CreateContainer within sandbox \"b2c8e3d89228f34a077f9125033dd1e1520e5e1f035f8cc28afa559a90fc19cd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1e1a44a9d1c0f71b053675a54462f6734e665423445241573768cba028361b78\"" Nov 4 04:53:13.543377 containerd[1675]: time="2025-11-04T04:53:13.543352075Z" level=info msg="StartContainer for \"1e1a44a9d1c0f71b053675a54462f6734e665423445241573768cba028361b78\"" Nov 4 04:53:13.545487 containerd[1675]: time="2025-11-04T04:53:13.545451904Z" level=info msg="connecting to shim 1e1a44a9d1c0f71b053675a54462f6734e665423445241573768cba028361b78" address="unix:///run/containerd/s/973e28910af7380a3add4f234350574dc4c8cfd7962aa27e0e673edf8c571ac1" protocol=ttrpc version=3 Nov 4 04:53:13.568328 systemd[1]: Started cri-containerd-1e1a44a9d1c0f71b053675a54462f6734e665423445241573768cba028361b78.scope - libcontainer container 1e1a44a9d1c0f71b053675a54462f6734e665423445241573768cba028361b78. 
Nov 4 04:53:13.623655 containerd[1675]: time="2025-11-04T04:53:13.623628847Z" level=info msg="StartContainer for \"1e1a44a9d1c0f71b053675a54462f6734e665423445241573768cba028361b78\" returns successfully" Nov 4 04:53:14.145928 containerd[1675]: time="2025-11-04T04:53:14.145665363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-c5fgp,Uid:c17ebeee-c521-467f-b0f7-4c3c787f171e,Namespace:calico-system,Attempt:0,}" Nov 4 04:53:14.156333 containerd[1675]: time="2025-11-04T04:53:14.156290498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b698f4965-59j2r,Uid:d45b2a71-f81e-4bd0-9908-32bdf3ccd976,Namespace:calico-apiserver,Attempt:0,}" Nov 4 04:53:14.333520 systemd-networkd[1561]: cali200c220b01d: Link UP Nov 4 04:53:14.334797 systemd-networkd[1561]: cali200c220b01d: Gained carrier Nov 4 04:53:14.357021 containerd[1675]: 2025-11-04 04:53:14.250 [INFO][4546] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--c5fgp-eth0 goldmane-7c778bb748- calico-system c17ebeee-c521-467f-b0f7-4c3c787f171e 841 0 2025-11-04 04:52:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-c5fgp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali200c220b01d [] [] }} ContainerID="f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231" Namespace="calico-system" Pod="goldmane-7c778bb748-c5fgp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--c5fgp-" Nov 4 04:53:14.357021 containerd[1675]: 2025-11-04 04:53:14.250 [INFO][4546] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231" Namespace="calico-system" Pod="goldmane-7c778bb748-c5fgp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--c5fgp-eth0" Nov 4 04:53:14.357021 containerd[1675]: 2025-11-04 04:53:14.285 [INFO][4572] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231" HandleID="k8s-pod-network.f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231" Workload="localhost-k8s-goldmane--7c778bb748--c5fgp-eth0" Nov 4 04:53:14.357021 containerd[1675]: 2025-11-04 04:53:14.285 [INFO][4572] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231" HandleID="k8s-pod-network.f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231" Workload="localhost-k8s-goldmane--7c778bb748--c5fgp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5850), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-c5fgp", "timestamp":"2025-11-04 04:53:14.285680633 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:53:14.357021 containerd[1675]: 2025-11-04 04:53:14.285 [INFO][4572] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:53:14.357021 containerd[1675]: 2025-11-04 04:53:14.285 [INFO][4572] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:53:14.357021 containerd[1675]: 2025-11-04 04:53:14.285 [INFO][4572] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:53:14.357021 containerd[1675]: 2025-11-04 04:53:14.297 [INFO][4572] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231" host="localhost" Nov 4 04:53:14.357021 containerd[1675]: 2025-11-04 04:53:14.300 [INFO][4572] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:53:14.357021 containerd[1675]: 2025-11-04 04:53:14.306 [INFO][4572] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:53:14.357021 containerd[1675]: 2025-11-04 04:53:14.308 [INFO][4572] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:53:14.357021 containerd[1675]: 2025-11-04 04:53:14.309 [INFO][4572] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:53:14.357021 containerd[1675]: 2025-11-04 04:53:14.309 [INFO][4572] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231" host="localhost" Nov 4 04:53:14.357021 containerd[1675]: 2025-11-04 04:53:14.311 [INFO][4572] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231 Nov 4 04:53:14.357021 containerd[1675]: 2025-11-04 04:53:14.315 [INFO][4572] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231" host="localhost" Nov 4 04:53:14.357021 containerd[1675]: 2025-11-04 04:53:14.322 [INFO][4572] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231" host="localhost" Nov 4 04:53:14.357021 containerd[1675]: 2025-11-04 04:53:14.322 [INFO][4572] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231" host="localhost" Nov 4 04:53:14.357021 containerd[1675]: 2025-11-04 04:53:14.322 [INFO][4572] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 04:53:14.357021 containerd[1675]: 2025-11-04 04:53:14.322 [INFO][4572] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231" HandleID="k8s-pod-network.f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231" Workload="localhost-k8s-goldmane--7c778bb748--c5fgp-eth0" Nov 4 04:53:14.361273 containerd[1675]: 2025-11-04 04:53:14.325 [INFO][4546] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231" Namespace="calico-system" Pod="goldmane-7c778bb748-c5fgp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--c5fgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--c5fgp-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"c17ebeee-c521-467f-b0f7-4c3c787f171e", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 52, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-c5fgp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali200c220b01d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:53:14.361273 containerd[1675]: 2025-11-04 04:53:14.325 [INFO][4546] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231" Namespace="calico-system" Pod="goldmane-7c778bb748-c5fgp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--c5fgp-eth0" Nov 4 04:53:14.361273 containerd[1675]: 2025-11-04 04:53:14.325 [INFO][4546] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali200c220b01d ContainerID="f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231" Namespace="calico-system" Pod="goldmane-7c778bb748-c5fgp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--c5fgp-eth0" Nov 4 04:53:14.361273 containerd[1675]: 2025-11-04 04:53:14.336 [INFO][4546] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231" Namespace="calico-system" Pod="goldmane-7c778bb748-c5fgp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--c5fgp-eth0" Nov 4 04:53:14.361273 containerd[1675]: 2025-11-04 04:53:14.336 [INFO][4546] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231" Namespace="calico-system" Pod="goldmane-7c778bb748-c5fgp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--c5fgp-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--c5fgp-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"c17ebeee-c521-467f-b0f7-4c3c787f171e", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 52, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231", Pod:"goldmane-7c778bb748-c5fgp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali200c220b01d", MAC:"fa:ef:94:12:96:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:53:14.361273 containerd[1675]: 2025-11-04 04:53:14.354 [INFO][4546] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231" Namespace="calico-system" Pod="goldmane-7c778bb748-c5fgp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--c5fgp-eth0" Nov 4 04:53:14.453913 systemd-networkd[1561]: calic2394d3fac7: Link UP Nov 4 04:53:14.456276 systemd-networkd[1561]: calic2394d3fac7: Gained carrier Nov 4 04:53:14.481535 containerd[1675]: time="2025-11-04T04:53:14.481494065Z" level=info msg="connecting to shim f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231" address="unix:///run/containerd/s/51d9a606c27d242defc6352187a6253586d06938a8e2ab3af9952f7b73c49158" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:53:14.503401 containerd[1675]: 2025-11-04 04:53:14.251 [INFO][4550] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b698f4965--59j2r-eth0 calico-apiserver-6b698f4965- calico-apiserver d45b2a71-f81e-4bd0-9908-32bdf3ccd976 842 0 2025-11-04 04:52:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b698f4965 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b698f4965-59j2r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic2394d3fac7 [] [] }} ContainerID="c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc" Namespace="calico-apiserver" Pod="calico-apiserver-6b698f4965-59j2r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b698f4965--59j2r-" Nov 4 04:53:14.503401 containerd[1675]: 2025-11-04 04:53:14.251 [INFO][4550] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc" Namespace="calico-apiserver" 
Pod="calico-apiserver-6b698f4965-59j2r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b698f4965--59j2r-eth0" Nov 4 04:53:14.503401 containerd[1675]: 2025-11-04 04:53:14.312 [INFO][4570] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc" HandleID="k8s-pod-network.c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc" Workload="localhost-k8s-calico--apiserver--6b698f4965--59j2r-eth0" Nov 4 04:53:14.503401 containerd[1675]: 2025-11-04 04:53:14.312 [INFO][4570] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc" HandleID="k8s-pod-network.c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc" Workload="localhost-k8s-calico--apiserver--6b698f4965--59j2r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c7890), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b698f4965-59j2r", "timestamp":"2025-11-04 04:53:14.31263838 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:53:14.503401 containerd[1675]: 2025-11-04 04:53:14.312 [INFO][4570] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:53:14.503401 containerd[1675]: 2025-11-04 04:53:14.322 [INFO][4570] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 04:53:14.503401 containerd[1675]: 2025-11-04 04:53:14.322 [INFO][4570] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:53:14.503401 containerd[1675]: 2025-11-04 04:53:14.398 [INFO][4570] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc" host="localhost" Nov 4 04:53:14.503401 containerd[1675]: 2025-11-04 04:53:14.401 [INFO][4570] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:53:14.503401 containerd[1675]: 2025-11-04 04:53:14.404 [INFO][4570] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:53:14.503401 containerd[1675]: 2025-11-04 04:53:14.405 [INFO][4570] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:53:14.503401 containerd[1675]: 2025-11-04 04:53:14.407 [INFO][4570] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:53:14.503401 containerd[1675]: 2025-11-04 04:53:14.407 [INFO][4570] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc" host="localhost" Nov 4 04:53:14.503401 containerd[1675]: 2025-11-04 04:53:14.408 [INFO][4570] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc Nov 4 04:53:14.503401 containerd[1675]: 2025-11-04 04:53:14.418 [INFO][4570] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc" host="localhost" Nov 4 04:53:14.503401 containerd[1675]: 2025-11-04 04:53:14.449 [INFO][4570] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc" host="localhost" Nov 4 04:53:14.503401 containerd[1675]: 2025-11-04 04:53:14.449 [INFO][4570] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc" host="localhost" Nov 4 04:53:14.503401 containerd[1675]: 2025-11-04 04:53:14.449 [INFO][4570] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:53:14.503401 containerd[1675]: 2025-11-04 04:53:14.449 [INFO][4570] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc" HandleID="k8s-pod-network.c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc" Workload="localhost-k8s-calico--apiserver--6b698f4965--59j2r-eth0" Nov 4 04:53:14.520529 containerd[1675]: 2025-11-04 04:53:14.451 [INFO][4550] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc" Namespace="calico-apiserver" Pod="calico-apiserver-6b698f4965-59j2r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b698f4965--59j2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b698f4965--59j2r-eth0", GenerateName:"calico-apiserver-6b698f4965-", Namespace:"calico-apiserver", SelfLink:"", UID:"d45b2a71-f81e-4bd0-9908-32bdf3ccd976", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 52, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b698f4965", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b698f4965-59j2r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2394d3fac7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:53:14.520529 containerd[1675]: 2025-11-04 04:53:14.451 [INFO][4550] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc" Namespace="calico-apiserver" Pod="calico-apiserver-6b698f4965-59j2r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b698f4965--59j2r-eth0" Nov 4 04:53:14.520529 containerd[1675]: 2025-11-04 04:53:14.451 [INFO][4550] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2394d3fac7 ContainerID="c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc" Namespace="calico-apiserver" Pod="calico-apiserver-6b698f4965-59j2r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b698f4965--59j2r-eth0" Nov 4 04:53:14.520529 containerd[1675]: 2025-11-04 04:53:14.455 
[INFO][4550] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc" Namespace="calico-apiserver" Pod="calico-apiserver-6b698f4965-59j2r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b698f4965--59j2r-eth0" Nov 4 04:53:14.520529 containerd[1675]: 2025-11-04 04:53:14.457 [INFO][4550] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc" Namespace="calico-apiserver" Pod="calico-apiserver-6b698f4965-59j2r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b698f4965--59j2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b698f4965--59j2r-eth0", GenerateName:"calico-apiserver-6b698f4965-", Namespace:"calico-apiserver", SelfLink:"", UID:"d45b2a71-f81e-4bd0-9908-32bdf3ccd976", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 52, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b698f4965", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc", Pod:"calico-apiserver-6b698f4965-59j2r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2394d3fac7", MAC:"aa:7a:ce:77:66:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:53:14.520529 containerd[1675]: 2025-11-04 04:53:14.499 [INFO][4550] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc" Namespace="calico-apiserver" Pod="calico-apiserver-6b698f4965-59j2r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b698f4965--59j2r-eth0" Nov 4 04:53:14.505305 systemd-networkd[1561]: cali63427f0ef6a: Gained IPv6LL Nov 4 04:53:14.524262 systemd[1]: Started cri-containerd-f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231.scope - libcontainer container f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231. 
Nov 4 04:53:14.597127 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:53:14.637713 containerd[1675]: time="2025-11-04T04:53:14.637662574Z" level=info msg="connecting to shim c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc" address="unix:///run/containerd/s/c2e66318e5f2cd5c9f3ce1b0b0834aade7d17602c5a4d2b05c7f2ec1939e4218" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:53:14.652172 containerd[1675]: time="2025-11-04T04:53:14.651996487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-c5fgp,Uid:c17ebeee-c521-467f-b0f7-4c3c787f171e,Namespace:calico-system,Attempt:0,} returns sandbox id \"f0f69d8f03586e34a7960e1d13eb961ed52a5f55b0cf11c6403cc0deeccbf231\"" Nov 4 04:53:14.668540 systemd[1]: Started cri-containerd-c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc.scope - libcontainer container c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc. Nov 4 04:53:14.677170 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:53:14.706067 containerd[1675]: time="2025-11-04T04:53:14.706003116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b698f4965-59j2r,Uid:d45b2a71-f81e-4bd0-9908-32bdf3ccd976,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c86d74616e1dd91063838577f04bf3c83c1c3b4433e32ee30c4a5707a451f4bc\"" Nov 4 04:53:14.763640 containerd[1675]: time="2025-11-04T04:53:14.763589072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 04:53:15.105656 containerd[1675]: time="2025-11-04T04:53:15.105303329Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:15.105838 containerd[1675]: time="2025-11-04T04:53:15.105790416Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 04:53:15.105838 containerd[1675]: time="2025-11-04T04:53:15.105834509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:15.106011 kubelet[2985]: E1104 04:53:15.105991 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:53:15.106340 kubelet[2985]: E1104 04:53:15.106021 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:53:15.106397 containerd[1675]: time="2025-11-04T04:53:15.106376347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:53:15.113338 kubelet[2985]: E1104 04:53:15.113167 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-c5fgp_calico-system(c17ebeee-c521-467f-b0f7-4c3c787f171e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:15.113338 kubelet[2985]: E1104 04:53:15.113212 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c5fgp" podUID="c17ebeee-c521-467f-b0f7-4c3c787f171e" Nov 4 04:53:15.142369 containerd[1675]: time="2025-11-04T04:53:15.142341244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xxm2v,Uid:7550d563-69e4-4b12-a9be-4644b313f876,Namespace:kube-system,Attempt:0,}" Nov 4 04:53:15.142690 containerd[1675]: time="2025-11-04T04:53:15.142672359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b698f4965-vbq5r,Uid:7732b0eb-674a-43e3-92d8-1fb67b4f9d41,Namespace:calico-apiserver,Attempt:0,}" Nov 4 04:53:15.273276 systemd-networkd[1561]: cali7c95365b1e1: Link UP Nov 4 04:53:15.274410 systemd-networkd[1561]: cali7c95365b1e1: Gained carrier Nov 4 04:53:15.300088 containerd[1675]: 2025-11-04 04:53:15.191 [INFO][4705] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b698f4965--vbq5r-eth0 calico-apiserver-6b698f4965- calico-apiserver 7732b0eb-674a-43e3-92d8-1fb67b4f9d41 843 0 2025-11-04 04:52:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b698f4965 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b698f4965-vbq5r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7c95365b1e1 [] [] }} ContainerID="07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8" Namespace="calico-apiserver" Pod="calico-apiserver-6b698f4965-vbq5r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b698f4965--vbq5r-" Nov 4 04:53:15.300088 containerd[1675]: 2025-11-04 04:53:15.191 [INFO][4705] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8" Namespace="calico-apiserver" Pod="calico-apiserver-6b698f4965-vbq5r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b698f4965--vbq5r-eth0" Nov 4 04:53:15.300088 containerd[1675]: 2025-11-04 04:53:15.224 [INFO][4725] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8" HandleID="k8s-pod-network.07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8" Workload="localhost-k8s-calico--apiserver--6b698f4965--vbq5r-eth0" Nov 4 04:53:15.300088 containerd[1675]: 2025-11-04 04:53:15.224 [INFO][4725] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8" HandleID="k8s-pod-network.07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8" Workload="localhost-k8s-calico--apiserver--6b698f4965--vbq5r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad750), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b698f4965-vbq5r", "timestamp":"2025-11-04 04:53:15.224726331 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:53:15.300088 containerd[1675]: 2025-11-04 04:53:15.224 [INFO][4725] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:53:15.300088 containerd[1675]: 2025-11-04 04:53:15.224 [INFO][4725] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 04:53:15.300088 containerd[1675]: 2025-11-04 04:53:15.224 [INFO][4725] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:53:15.300088 containerd[1675]: 2025-11-04 04:53:15.230 [INFO][4725] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8" host="localhost" Nov 4 04:53:15.300088 containerd[1675]: 2025-11-04 04:53:15.233 [INFO][4725] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:53:15.300088 containerd[1675]: 2025-11-04 04:53:15.235 [INFO][4725] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:53:15.300088 containerd[1675]: 2025-11-04 04:53:15.236 [INFO][4725] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:53:15.300088 containerd[1675]: 2025-11-04 04:53:15.237 [INFO][4725] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:53:15.300088 containerd[1675]: 2025-11-04 04:53:15.237 [INFO][4725] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8" host="localhost" Nov 4 04:53:15.300088 containerd[1675]: 2025-11-04 04:53:15.238 [INFO][4725] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8 Nov 4 04:53:15.300088 containerd[1675]: 2025-11-04 04:53:15.250 [INFO][4725] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8" host="localhost" Nov 4 04:53:15.300088 containerd[1675]: 2025-11-04 04:53:15.266 [INFO][4725] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8" host="localhost" Nov 4 04:53:15.300088 containerd[1675]: 2025-11-04 04:53:15.266 [INFO][4725] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8" host="localhost" Nov 4 04:53:15.300088 containerd[1675]: 2025-11-04 04:53:15.266 [INFO][4725] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 04:53:15.300088 containerd[1675]: 2025-11-04 04:53:15.266 [INFO][4725] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8" HandleID="k8s-pod-network.07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8" Workload="localhost-k8s-calico--apiserver--6b698f4965--vbq5r-eth0" Nov 4 04:53:15.311616 containerd[1675]: 2025-11-04 04:53:15.268 [INFO][4705] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8" Namespace="calico-apiserver" Pod="calico-apiserver-6b698f4965-vbq5r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b698f4965--vbq5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b698f4965--vbq5r-eth0", GenerateName:"calico-apiserver-6b698f4965-", Namespace:"calico-apiserver", SelfLink:"", UID:"7732b0eb-674a-43e3-92d8-1fb67b4f9d41", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 52, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b698f4965", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b698f4965-vbq5r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c95365b1e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:53:15.311616 containerd[1675]: 2025-11-04 04:53:15.269 [INFO][4705] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8" Namespace="calico-apiserver" Pod="calico-apiserver-6b698f4965-vbq5r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b698f4965--vbq5r-eth0" Nov 4 04:53:15.311616 containerd[1675]: 2025-11-04 04:53:15.269 [INFO][4705] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c95365b1e1 ContainerID="07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8" Namespace="calico-apiserver" Pod="calico-apiserver-6b698f4965-vbq5r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b698f4965--vbq5r-eth0" Nov 4 04:53:15.311616 containerd[1675]: 2025-11-04 04:53:15.275 [INFO][4705] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8" Namespace="calico-apiserver" Pod="calico-apiserver-6b698f4965-vbq5r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b698f4965--vbq5r-eth0" Nov 4 04:53:15.311616 containerd[1675]: 2025-11-04 04:53:15.275 [INFO][4705] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8" Namespace="calico-apiserver" Pod="calico-apiserver-6b698f4965-vbq5r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b698f4965--vbq5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b698f4965--vbq5r-eth0", GenerateName:"calico-apiserver-6b698f4965-", Namespace:"calico-apiserver", SelfLink:"", UID:"7732b0eb-674a-43e3-92d8-1fb67b4f9d41", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 52, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b698f4965", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8", Pod:"calico-apiserver-6b698f4965-vbq5r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c95365b1e1", MAC:"3e:df:9e:8f:97:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:53:15.311616 containerd[1675]: 2025-11-04 04:53:15.294 [INFO][4705] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8" Namespace="calico-apiserver" Pod="calico-apiserver-6b698f4965-vbq5r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b698f4965--vbq5r-eth0" Nov 4 04:53:15.391373 kubelet[2985]: I1104 04:53:15.371956 2985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pj2c4" podStartSLOduration=39.29359616 podStartE2EDuration="39.29359616s" podCreationTimestamp="2025-11-04 04:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:53:14.722571696 +0000 UTC m=+44.664830181" watchObservedRunningTime="2025-11-04 04:53:15.29359616 +0000 UTC m=+45.235854640" Nov 4 04:53:15.395464 systemd-networkd[1561]: calic4653c3ade2: Link UP Nov 4 04:53:15.396211 systemd-networkd[1561]: calic4653c3ade2: Gained carrier Nov 4 04:53:15.429243 containerd[1675]: 2025-11-04 04:53:15.188 [INFO][4700] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--xxm2v-eth0 coredns-66bc5c9577- kube-system 7550d563-69e4-4b12-a9be-4644b313f876 840 0 2025-11-04 04:52:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-xxm2v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic4653c3ade2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 
} {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86" Namespace="kube-system" Pod="coredns-66bc5c9577-xxm2v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xxm2v-" Nov 4 04:53:15.429243 containerd[1675]: 2025-11-04 04:53:15.188 [INFO][4700] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86" Namespace="kube-system" Pod="coredns-66bc5c9577-xxm2v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xxm2v-eth0" Nov 4 04:53:15.429243 containerd[1675]: 2025-11-04 04:53:15.241 [INFO][4723] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86" HandleID="k8s-pod-network.3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86" Workload="localhost-k8s-coredns--66bc5c9577--xxm2v-eth0" Nov 4 04:53:15.429243 containerd[1675]: 2025-11-04 04:53:15.241 [INFO][4723] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86" HandleID="k8s-pod-network.3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86" Workload="localhost-k8s-coredns--66bc5c9577--xxm2v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-xxm2v", "timestamp":"2025-11-04 04:53:15.241450429 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:53:15.429243 containerd[1675]: 2025-11-04 04:53:15.241 [INFO][4723] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:53:15.429243 containerd[1675]: 2025-11-04 04:53:15.266 [INFO][4723] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:53:15.429243 containerd[1675]: 2025-11-04 04:53:15.266 [INFO][4723] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:53:15.429243 containerd[1675]: 2025-11-04 04:53:15.345 [INFO][4723] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86" host="localhost" Nov 4 04:53:15.429243 containerd[1675]: 2025-11-04 04:53:15.349 [INFO][4723] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:53:15.429243 containerd[1675]: 2025-11-04 04:53:15.351 [INFO][4723] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:53:15.429243 containerd[1675]: 2025-11-04 04:53:15.352 [INFO][4723] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:53:15.429243 containerd[1675]: 2025-11-04 04:53:15.353 [INFO][4723] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:53:15.429243 containerd[1675]: 2025-11-04 04:53:15.353 [INFO][4723] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86" host="localhost" Nov 4 04:53:15.429243 containerd[1675]: 2025-11-04 04:53:15.353 [INFO][4723] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86 Nov 4 04:53:15.429243 containerd[1675]: 2025-11-04 04:53:15.362 [INFO][4723] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86" host="localhost" Nov 4 04:53:15.429243 containerd[1675]: 2025-11-04 04:53:15.388 [INFO][4723] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86" host="localhost" Nov 4 04:53:15.429243 containerd[1675]: 2025-11-04 04:53:15.388 [INFO][4723] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86" host="localhost" Nov 4 04:53:15.429243 containerd[1675]: 2025-11-04 04:53:15.388 [INFO][4723] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 04:53:15.429243 containerd[1675]: 2025-11-04 04:53:15.388 [INFO][4723] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86" HandleID="k8s-pod-network.3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86" Workload="localhost-k8s-coredns--66bc5c9577--xxm2v-eth0" Nov 4 04:53:15.437274 containerd[1675]: 2025-11-04 04:53:15.390 [INFO][4700] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86" Namespace="kube-system" Pod="coredns-66bc5c9577-xxm2v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xxm2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--xxm2v-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7550d563-69e4-4b12-a9be-4644b313f876", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 52, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-xxm2v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic4653c3ade2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:53:15.437274 containerd[1675]: 2025-11-04 04:53:15.392 [INFO][4700] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86" Namespace="kube-system" Pod="coredns-66bc5c9577-xxm2v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xxm2v-eth0" Nov 4 04:53:15.437274 containerd[1675]: 2025-11-04 04:53:15.392 [INFO][4700] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4653c3ade2 ContainerID="3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86" Namespace="kube-system" Pod="coredns-66bc5c9577-xxm2v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xxm2v-eth0" Nov 4 04:53:15.437274 containerd[1675]: 2025-11-04 04:53:15.396 
[INFO][4700] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86" Namespace="kube-system" Pod="coredns-66bc5c9577-xxm2v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xxm2v-eth0" Nov 4 04:53:15.437274 containerd[1675]: 2025-11-04 04:53:15.397 [INFO][4700] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86" Namespace="kube-system" Pod="coredns-66bc5c9577-xxm2v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xxm2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--xxm2v-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7550d563-69e4-4b12-a9be-4644b313f876", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 52, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86", Pod:"coredns-66bc5c9577-xxm2v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic4653c3ade2", MAC:"86:78:5d:20:08:20", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:53:15.437274 containerd[1675]: 2025-11-04 04:53:15.425 [INFO][4700] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86" Namespace="kube-system" Pod="coredns-66bc5c9577-xxm2v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xxm2v-eth0" Nov 4 04:53:15.490654 containerd[1675]: time="2025-11-04T04:53:15.490616855Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:15.499620 containerd[1675]: time="2025-11-04T04:53:15.499290775Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:53:15.499620 containerd[1675]: time="2025-11-04T04:53:15.499361281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:15.501198 kubelet[2985]: E1104 04:53:15.500776 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:53:15.501198 kubelet[2985]: E1104 04:53:15.500811 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:53:15.501198 kubelet[2985]: E1104 04:53:15.500869 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b698f4965-59j2r_calico-apiserver(d45b2a71-f81e-4bd0-9908-32bdf3ccd976): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:15.501198 kubelet[2985]: E1104 04:53:15.500897 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b698f4965-59j2r" podUID="d45b2a71-f81e-4bd0-9908-32bdf3ccd976" Nov 4 04:53:15.507428 containerd[1675]: time="2025-11-04T04:53:15.506765479Z" level=info msg="connecting to shim 07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8" address="unix:///run/containerd/s/47eb3b83f4fe4a9e6cd92be9799583f10743521cf8948290ecbeaa1d2723af57" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:53:15.543153 containerd[1675]: time="2025-11-04T04:53:15.542258736Z" level=info msg="connecting to shim 3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86" address="unix:///run/containerd/s/e869a84121be02d4fb31b1ed0fc005d776389f1533d839913fa7e2e4066c59f1" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:53:15.580309 systemd[1]: Started cri-containerd-07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8.scope - libcontainer container 07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8. Nov 4 04:53:15.592276 systemd[1]: Started cri-containerd-3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86.scope - libcontainer container 3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86. 
Nov 4 04:53:15.599554 systemd-networkd[1561]: calic2394d3fac7: Gained IPv6LL Nov 4 04:53:15.603694 kubelet[2985]: E1104 04:53:15.603643 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b698f4965-59j2r" podUID="d45b2a71-f81e-4bd0-9908-32bdf3ccd976" Nov 4 04:53:15.608402 kubelet[2985]: E1104 04:53:15.605715 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c5fgp" podUID="c17ebeee-c521-467f-b0f7-4c3c787f171e" Nov 4 04:53:15.634101 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:53:15.668243 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:53:15.680165 containerd[1675]: time="2025-11-04T04:53:15.679404430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xxm2v,Uid:7550d563-69e4-4b12-a9be-4644b313f876,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86\"" Nov 4 04:53:15.701150 containerd[1675]: time="2025-11-04T04:53:15.699672526Z" level=info msg="CreateContainer within sandbox \"3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 04:53:15.749382 containerd[1675]: time="2025-11-04T04:53:15.749361227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b698f4965-vbq5r,Uid:7732b0eb-674a-43e3-92d8-1fb67b4f9d41,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"07e60533ba160b543d9afc0ade123582fdaaecf4ebabe3a4a6ae2bc3df0f23c8\"" Nov 4 04:53:15.751327 containerd[1675]: time="2025-11-04T04:53:15.751307428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:53:15.769362 containerd[1675]: time="2025-11-04T04:53:15.769329875Z" level=info msg="Container c768bc22b20d5bc86897079a6b3f650bfa28326f64c56b865aa45edd553682db: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:53:15.813549 containerd[1675]: time="2025-11-04T04:53:15.813502942Z" level=info msg="CreateContainer within sandbox \"3a847ef2be0a19d9c57c470d81b764e4cda6d8ad4ffc90533cca72e11d70ec86\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c768bc22b20d5bc86897079a6b3f650bfa28326f64c56b865aa45edd553682db\"" Nov 4 04:53:15.814309 containerd[1675]: time="2025-11-04T04:53:15.814287985Z" level=info msg="StartContainer for \"c768bc22b20d5bc86897079a6b3f650bfa28326f64c56b865aa45edd553682db\"" Nov 4 04:53:15.815950 containerd[1675]: time="2025-11-04T04:53:15.815915824Z" level=info msg="connecting to shim c768bc22b20d5bc86897079a6b3f650bfa28326f64c56b865aa45edd553682db" 
address="unix:///run/containerd/s/e869a84121be02d4fb31b1ed0fc005d776389f1533d839913fa7e2e4066c59f1" protocol=ttrpc version=3 Nov 4 04:53:15.837274 systemd[1]: Started cri-containerd-c768bc22b20d5bc86897079a6b3f650bfa28326f64c56b865aa45edd553682db.scope - libcontainer container c768bc22b20d5bc86897079a6b3f650bfa28326f64c56b865aa45edd553682db. Nov 4 04:53:15.850423 systemd-networkd[1561]: cali200c220b01d: Gained IPv6LL Nov 4 04:53:15.866155 containerd[1675]: time="2025-11-04T04:53:15.866111868Z" level=info msg="StartContainer for \"c768bc22b20d5bc86897079a6b3f650bfa28326f64c56b865aa45edd553682db\" returns successfully" Nov 4 04:53:16.109637 containerd[1675]: time="2025-11-04T04:53:16.109482736Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:16.110172 containerd[1675]: time="2025-11-04T04:53:16.110121689Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:53:16.110374 containerd[1675]: time="2025-11-04T04:53:16.110188201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:16.110453 kubelet[2985]: E1104 04:53:16.110425 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:53:16.112391 kubelet[2985]: E1104 04:53:16.110459 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:53:16.112391 kubelet[2985]: E1104 04:53:16.110514 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b698f4965-vbq5r_calico-apiserver(7732b0eb-674a-43e3-92d8-1fb67b4f9d41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:16.112391 kubelet[2985]: E1104 04:53:16.110542 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b698f4965-vbq5r" podUID="7732b0eb-674a-43e3-92d8-1fb67b4f9d41" Nov 4 04:53:16.155149 containerd[1675]: time="2025-11-04T04:53:16.155092074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-747674d8bc-lrrzc,Uid:f119f006-6b2e-4995-831f-8cf672906209,Namespace:calico-system,Attempt:0,}" Nov 4 04:53:16.359145 systemd-networkd[1561]: cali5712baec357: Link UP Nov 4 04:53:16.360398 systemd-networkd[1561]: cali5712baec357: Gained carrier Nov 4 04:53:16.386220 containerd[1675]: 2025-11-04 04:53:16.246 [INFO][4883] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--747674d8bc--lrrzc-eth0 calico-kube-controllers-747674d8bc- calico-system f119f006-6b2e-4995-831f-8cf672906209 838 0 2025-11-04 04:52:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:747674d8bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-747674d8bc-lrrzc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5712baec357 [] [] }} ContainerID="3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70" Namespace="calico-system" Pod="calico-kube-controllers-747674d8bc-lrrzc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--747674d8bc--lrrzc-" Nov 4 04:53:16.386220 containerd[1675]: 2025-11-04 04:53:16.246 [INFO][4883] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70" Namespace="calico-system" Pod="calico-kube-controllers-747674d8bc-lrrzc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--747674d8bc--lrrzc-eth0" Nov 4 04:53:16.386220 containerd[1675]: 2025-11-04 04:53:16.296 [INFO][4890] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70" HandleID="k8s-pod-network.3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70" Workload="localhost-k8s-calico--kube--controllers--747674d8bc--lrrzc-eth0" Nov 4 04:53:16.386220 containerd[1675]: 2025-11-04 04:53:16.297 [INFO][4890] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70" HandleID="k8s-pod-network.3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70" Workload="localhost-k8s-calico--kube--controllers--747674d8bc--lrrzc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5ca0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-747674d8bc-lrrzc", "timestamp":"2025-11-04 04:53:16.296698494 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:53:16.386220 containerd[1675]: 2025-11-04 04:53:16.297 [INFO][4890] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:53:16.386220 containerd[1675]: 2025-11-04 04:53:16.297 [INFO][4890] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:53:16.386220 containerd[1675]: 2025-11-04 04:53:16.298 [INFO][4890] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:53:16.386220 containerd[1675]: 2025-11-04 04:53:16.305 [INFO][4890] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70" host="localhost" Nov 4 04:53:16.386220 containerd[1675]: 2025-11-04 04:53:16.310 [INFO][4890] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:53:16.386220 containerd[1675]: 2025-11-04 04:53:16.314 [INFO][4890] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:53:16.386220 containerd[1675]: 2025-11-04 04:53:16.316 [INFO][4890] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:53:16.386220 containerd[1675]: 2025-11-04 04:53:16.318 [INFO][4890] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:53:16.386220 containerd[1675]: 2025-11-04 04:53:16.318 [INFO][4890] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70" host="localhost" Nov 4 04:53:16.386220 containerd[1675]: 2025-11-04 04:53:16.319 [INFO][4890] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70 Nov 4 04:53:16.386220 containerd[1675]: 2025-11-04 04:53:16.334 [INFO][4890] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70" host="localhost" Nov 4 04:53:16.386220 containerd[1675]: 2025-11-04 04:53:16.344 [INFO][4890] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70" host="localhost" Nov 4 04:53:16.386220 containerd[1675]: 2025-11-04 04:53:16.344 [INFO][4890] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70" host="localhost" Nov 4 04:53:16.386220 containerd[1675]: 2025-11-04 04:53:16.345 [INFO][4890] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 04:53:16.386220 containerd[1675]: 2025-11-04 04:53:16.345 [INFO][4890] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70" HandleID="k8s-pod-network.3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70" Workload="localhost-k8s-calico--kube--controllers--747674d8bc--lrrzc-eth0" Nov 4 04:53:16.389805 containerd[1675]: 2025-11-04 04:53:16.348 [INFO][4883] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70" Namespace="calico-system" Pod="calico-kube-controllers-747674d8bc-lrrzc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--747674d8bc--lrrzc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--747674d8bc--lrrzc-eth0", GenerateName:"calico-kube-controllers-747674d8bc-", Namespace:"calico-system", SelfLink:"", UID:"f119f006-6b2e-4995-831f-8cf672906209", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 52, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"747674d8bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-747674d8bc-lrrzc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5712baec357", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:53:16.389805 containerd[1675]: 2025-11-04 04:53:16.348 [INFO][4883] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70" Namespace="calico-system" Pod="calico-kube-controllers-747674d8bc-lrrzc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--747674d8bc--lrrzc-eth0" Nov 4 04:53:16.389805 containerd[1675]: 2025-11-04 04:53:16.348 [INFO][4883] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5712baec357 ContainerID="3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70" Namespace="calico-system" Pod="calico-kube-controllers-747674d8bc-lrrzc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--747674d8bc--lrrzc-eth0" Nov 4 04:53:16.389805 containerd[1675]: 2025-11-04 04:53:16.362 [INFO][4883] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70" Namespace="calico-system" Pod="calico-kube-controllers-747674d8bc-lrrzc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--747674d8bc--lrrzc-eth0" Nov 4 04:53:16.389805 containerd[1675]: 2025-11-04 04:53:16.363 [INFO][4883] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70" Namespace="calico-system" Pod="calico-kube-controllers-747674d8bc-lrrzc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--747674d8bc--lrrzc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--747674d8bc--lrrzc-eth0", GenerateName:"calico-kube-controllers-747674d8bc-", Namespace:"calico-system", SelfLink:"", UID:"f119f006-6b2e-4995-831f-8cf672906209", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 52, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"747674d8bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70", Pod:"calico-kube-controllers-747674d8bc-lrrzc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5712baec357", MAC:"da:69:36:39:d0:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:53:16.389805 containerd[1675]: 2025-11-04 04:53:16.379 [INFO][4883] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70" Namespace="calico-system" Pod="calico-kube-controllers-747674d8bc-lrrzc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--747674d8bc--lrrzc-eth0" Nov 4 04:53:16.462494 containerd[1675]: time="2025-11-04T04:53:16.462432492Z" level=info msg="connecting to shim 3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70" address="unix:///run/containerd/s/519bf8c063332eea4ee24c7cfd864c44600588a663ad9902b49d65555239dfc4" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:53:16.512292 systemd[1]: Started cri-containerd-3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70.scope - libcontainer container 3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70. 
Nov 4 04:53:16.538513 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:53:16.598477 containerd[1675]: time="2025-11-04T04:53:16.598446674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-747674d8bc-lrrzc,Uid:f119f006-6b2e-4995-831f-8cf672906209,Namespace:calico-system,Attempt:0,} returns sandbox id \"3ec8b6d112e3e8054d69e763b453645db628379924f47eb91fb352dbad7c7b70\"" Nov 4 04:53:16.602724 containerd[1675]: time="2025-11-04T04:53:16.599549575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 04:53:16.625499 kubelet[2985]: E1104 04:53:16.624893 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c5fgp" podUID="c17ebeee-c521-467f-b0f7-4c3c787f171e" Nov 4 04:53:16.627257 kubelet[2985]: E1104 04:53:16.627225 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b698f4965-59j2r" podUID="d45b2a71-f81e-4bd0-9908-32bdf3ccd976" Nov 4 04:53:16.627377 kubelet[2985]: E1104 04:53:16.627305 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b698f4965-vbq5r" podUID="7732b0eb-674a-43e3-92d8-1fb67b4f9d41" Nov 4 04:53:16.708861 kubelet[2985]: I1104 04:53:16.708820 2985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xxm2v" podStartSLOduration=40.708807428 podStartE2EDuration="40.708807428s" podCreationTimestamp="2025-11-04 04:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:53:16.649701128 +0000 UTC m=+46.591959614" watchObservedRunningTime="2025-11-04 04:53:16.708807428 +0000 UTC m=+46.651065906" Nov 4 04:53:16.954833 containerd[1675]: time="2025-11-04T04:53:16.954617578Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:16.957336 containerd[1675]: time="2025-11-04T04:53:16.957291646Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 04:53:16.957504 containerd[1675]: time="2025-11-04T04:53:16.957334747Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:16.957796 kubelet[2985]: E1104 04:53:16.957760 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:53:16.957859 kubelet[2985]: E1104 04:53:16.957807 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:53:16.958001 kubelet[2985]: E1104 04:53:16.957890 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-747674d8bc-lrrzc_calico-system(f119f006-6b2e-4995-831f-8cf672906209): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:16.958001 kubelet[2985]: E1104 04:53:16.957926 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-747674d8bc-lrrzc" podUID="f119f006-6b2e-4995-831f-8cf672906209" Nov 4 04:53:17.129315 systemd-networkd[1561]: cali7c95365b1e1: Gained IPv6LL Nov 4 04:53:17.146637 containerd[1675]: time="2025-11-04T04:53:17.146294153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8l85,Uid:fcffefa6-5e43-470d-ba72-4fb9af0e6455,Namespace:calico-system,Attempt:0,}" Nov 4 04:53:17.193374 systemd-networkd[1561]: calic4653c3ade2: Gained IPv6LL Nov 4 04:53:17.356775 systemd-networkd[1561]: cali557f5c0e912: Link UP Nov 4 04:53:17.359286 systemd-networkd[1561]: cali557f5c0e912: Gained carrier Nov 4 04:53:17.380233 containerd[1675]: 2025-11-04 04:53:17.258 [INFO][4965] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--n8l85-eth0 csi-node-driver- calico-system fcffefa6-5e43-470d-ba72-4fb9af0e6455 729 0 2025-11-04 04:52:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-n8l85 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali557f5c0e912 [] [] }} ContainerID="16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116" Namespace="calico-system" Pod="csi-node-driver-n8l85" WorkloadEndpoint="localhost-k8s-csi--node--driver--n8l85-" Nov 4 04:53:17.380233 containerd[1675]: 2025-11-04 04:53:17.267 [INFO][4965] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116" Namespace="calico-system" Pod="csi-node-driver-n8l85" WorkloadEndpoint="localhost-k8s-csi--node--driver--n8l85-eth0" Nov 4 04:53:17.380233 containerd[1675]: 2025-11-04 04:53:17.306 [INFO][4977] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116" HandleID="k8s-pod-network.16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116" Workload="localhost-k8s-csi--node--driver--n8l85-eth0" Nov 4 04:53:17.380233 containerd[1675]: 2025-11-04 04:53:17.306 [INFO][4977] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116" HandleID="k8s-pod-network.16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116" Workload="localhost-k8s-csi--node--driver--n8l85-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb7f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-n8l85", "timestamp":"2025-11-04 04:53:17.306663407 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:53:17.380233 containerd[1675]: 2025-11-04 04:53:17.306 [INFO][4977] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:53:17.380233 containerd[1675]: 2025-11-04 04:53:17.306 [INFO][4977] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 04:53:17.380233 containerd[1675]: 2025-11-04 04:53:17.306 [INFO][4977] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:53:17.380233 containerd[1675]: 2025-11-04 04:53:17.320 [INFO][4977] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116" host="localhost" Nov 4 04:53:17.380233 containerd[1675]: 2025-11-04 04:53:17.323 [INFO][4977] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:53:17.380233 containerd[1675]: 2025-11-04 04:53:17.325 [INFO][4977] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:53:17.380233 containerd[1675]: 2025-11-04 04:53:17.326 [INFO][4977] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:53:17.380233 containerd[1675]: 2025-11-04 04:53:17.328 [INFO][4977] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:53:17.380233 containerd[1675]: 2025-11-04 04:53:17.328 [INFO][4977] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116" host="localhost" Nov 4 04:53:17.380233 containerd[1675]: 2025-11-04 04:53:17.329 [INFO][4977] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116 Nov 4 04:53:17.380233 containerd[1675]: 2025-11-04 04:53:17.334 [INFO][4977] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116" host="localhost" Nov 4 04:53:17.380233 containerd[1675]: 2025-11-04 04:53:17.350 [INFO][4977] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116" host="localhost" Nov 4 04:53:17.380233 containerd[1675]: 2025-11-04 04:53:17.350 [INFO][4977] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116" host="localhost" Nov 4 04:53:17.380233 containerd[1675]: 2025-11-04 04:53:17.350 [INFO][4977] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:53:17.380233 containerd[1675]: 2025-11-04 04:53:17.350 [INFO][4977] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116" HandleID="k8s-pod-network.16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116" Workload="localhost-k8s-csi--node--driver--n8l85-eth0" Nov 4 04:53:17.392002 containerd[1675]: 2025-11-04 04:53:17.352 [INFO][4965] cni-plugin/k8s.go 418: Populated endpoint ContainerID="16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116" Namespace="calico-system" Pod="csi-node-driver-n8l85" WorkloadEndpoint="localhost-k8s-csi--node--driver--n8l85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--n8l85-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fcffefa6-5e43-470d-ba72-4fb9af0e6455", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 52, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-n8l85", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali557f5c0e912", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:53:17.392002 containerd[1675]: 2025-11-04 04:53:17.352 [INFO][4965] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116" Namespace="calico-system" Pod="csi-node-driver-n8l85" WorkloadEndpoint="localhost-k8s-csi--node--driver--n8l85-eth0" Nov 4 04:53:17.392002 containerd[1675]: 2025-11-04 04:53:17.352 [INFO][4965] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali557f5c0e912 ContainerID="16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116" Namespace="calico-system" Pod="csi-node-driver-n8l85" WorkloadEndpoint="localhost-k8s-csi--node--driver--n8l85-eth0" Nov 4 04:53:17.392002 containerd[1675]: 2025-11-04 04:53:17.358 [INFO][4965] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116" Namespace="calico-system" Pod="csi-node-driver-n8l85" WorkloadEndpoint="localhost-k8s-csi--node--driver--n8l85-eth0" Nov 4 04:53:17.392002 containerd[1675]: 2025-11-04 04:53:17.358 [INFO][4965] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116" Namespace="calico-system" Pod="csi-node-driver-n8l85" WorkloadEndpoint="localhost-k8s-csi--node--driver--n8l85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--n8l85-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fcffefa6-5e43-470d-ba72-4fb9af0e6455", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 52, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116", Pod:"csi-node-driver-n8l85", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali557f5c0e912", MAC:"ba:39:fa:6d:3d:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:53:17.392002 containerd[1675]: 2025-11-04 04:53:17.376 [INFO][4965] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116" Namespace="calico-system" Pod="csi-node-driver-n8l85" WorkloadEndpoint="localhost-k8s-csi--node--driver--n8l85-eth0" Nov 4 04:53:17.542469 containerd[1675]: time="2025-11-04T04:53:17.542428096Z" level=info msg="connecting to shim 16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116" address="unix:///run/containerd/s/23c46533d5e7b100dbf89e366af31fafb6085c3da42e08da488ad78a36f03603" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:53:17.584416 systemd[1]: Started cri-containerd-16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116.scope - libcontainer container 16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116. 
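The ipam/ipam.go lines above show Calico confirming host affinity for the block 192.168.88.128/26 and then claiming 192.168.88.136 out of it for csi-node-driver-n8l85 (the endpoint itself is written with the address as a /32). A minimal, purely illustrative Python sketch of that block arithmetic, using only the standard library rather than anything from the Calico codebase:

```python
# Illustrative only: the /26 block arithmetic visible in the ipam/ipam.go
# lines above, reproduced with Python's standard ipaddress module.
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")      # host-affine block from the log
assigned = ipaddress.ip_interface("192.168.88.136/26").ip

print(block.num_addresses)        # 64 addresses per /26 block
print(assigned in block)          # True: .136 lies inside .128-.191
print(list(block.hosts())[:3])    # first assignable addresses in the block
```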
Nov 4 04:53:17.609114 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:53:17.631235 kubelet[2985]: E1104 04:53:17.631195 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b698f4965-vbq5r" podUID="7732b0eb-674a-43e3-92d8-1fb67b4f9d41" Nov 4 04:53:17.633898 containerd[1675]: time="2025-11-04T04:53:17.633827247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8l85,Uid:fcffefa6-5e43-470d-ba72-4fb9af0e6455,Namespace:calico-system,Attempt:0,} returns sandbox id \"16fff265390c1c08552a2ba7b7768bf68f0c1cba1f9ec04e0ab30c84e0b0e116\"" Nov 4 04:53:17.636199 kubelet[2985]: E1104 04:53:17.636165 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-747674d8bc-lrrzc" podUID="f119f006-6b2e-4995-831f-8cf672906209" Nov 4 04:53:17.637793 containerd[1675]: time="2025-11-04T04:53:17.637752443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 04:53:17.897251 systemd-networkd[1561]: cali5712baec357: Gained IPv6LL Nov 4 04:53:17.998930 containerd[1675]: time="2025-11-04T04:53:17.998890668Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:18.000327 containerd[1675]: time="2025-11-04T04:53:18.000290393Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 04:53:18.000384 containerd[1675]: time="2025-11-04T04:53:18.000368797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:18.001759 kubelet[2985]: E1104 04:53:18.001403 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:53:18.001759 kubelet[2985]: E1104 04:53:18.001440 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:53:18.001759 kubelet[2985]: E1104 04:53:18.001494 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-n8l85_calico-system(fcffefa6-5e43-470d-ba72-4fb9af0e6455): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:18.002297 containerd[1675]: time="2025-11-04T04:53:18.002274267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 04:53:18.382772 containerd[1675]: time="2025-11-04T04:53:18.382707039Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:18.398083 containerd[1675]: time="2025-11-04T04:53:18.398037337Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 04:53:18.398213 containerd[1675]: time="2025-11-04T04:53:18.398093138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:18.398300 kubelet[2985]: E1104 04:53:18.398267 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:53:18.398353 kubelet[2985]: E1104 04:53:18.398314 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:53:18.406246 kubelet[2985]: E1104 04:53:18.398377 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-n8l85_calico-system(fcffefa6-5e43-470d-ba72-4fb9af0e6455): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:18.406246 kubelet[2985]: E1104 04:53:18.398412 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8l85" podUID="fcffefa6-5e43-470d-ba72-4fb9af0e6455" Nov 4 04:53:18.631283 kubelet[2985]: E1104 04:53:18.631201 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed 
to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8l85" podUID="fcffefa6-5e43-470d-ba72-4fb9af0e6455" Nov 4 04:53:18.729564 systemd-networkd[1561]: cali557f5c0e912: Gained IPv6LL Nov 4 04:53:19.633608 kubelet[2985]: E1104 04:53:19.633237 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8l85" podUID="fcffefa6-5e43-470d-ba72-4fb9af0e6455" Nov 4 04:53:23.142814 containerd[1675]: time="2025-11-04T04:53:23.142784418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 04:53:23.458947 containerd[1675]: time="2025-11-04T04:53:23.458835364Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:23.464925 containerd[1675]: time="2025-11-04T04:53:23.464886206Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 04:53:23.465004 containerd[1675]: time="2025-11-04T04:53:23.464980965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:23.465575 kubelet[2985]: E1104 04:53:23.465200 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:53:23.465575 kubelet[2985]: E1104 04:53:23.465255 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:53:23.465575 kubelet[2985]: E1104 04:53:23.465314 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7c7fd785ff-28wps_calico-system(f9df857a-4dd9-492a-a798-bdfc5556871c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:23.467258 containerd[1675]: 
time="2025-11-04T04:53:23.467109116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 04:53:23.840922 containerd[1675]: time="2025-11-04T04:53:23.840836883Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:23.848383 containerd[1675]: time="2025-11-04T04:53:23.848345419Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 04:53:23.848486 containerd[1675]: time="2025-11-04T04:53:23.848416165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:23.848634 kubelet[2985]: E1104 04:53:23.848600 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:53:23.848690 kubelet[2985]: E1104 04:53:23.848638 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:53:23.848714 kubelet[2985]: E1104 04:53:23.848692 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7c7fd785ff-28wps_calico-system(f9df857a-4dd9-492a-a798-bdfc5556871c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:23.848968 kubelet[2985]: E1104 04:53:23.848721 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c7fd785ff-28wps" podUID="f9df857a-4dd9-492a-a798-bdfc5556871c" Nov 4 04:53:27.142210 containerd[1675]: time="2025-11-04T04:53:27.141937630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:53:27.437976 containerd[1675]: time="2025-11-04T04:53:27.437866898Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:27.438499 containerd[1675]: time="2025-11-04T04:53:27.438473309Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:53:27.438608 containerd[1675]: time="2025-11-04T04:53:27.438590727Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:27.438780 kubelet[2985]: E1104 04:53:27.438725 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:53:27.438780 kubelet[2985]: E1104 04:53:27.438765 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:53:27.439459 kubelet[2985]: E1104 04:53:27.438917 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b698f4965-59j2r_calico-apiserver(d45b2a71-f81e-4bd0-9908-32bdf3ccd976): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:27.439459 kubelet[2985]: E1104 04:53:27.439340 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b698f4965-59j2r" podUID="d45b2a71-f81e-4bd0-9908-32bdf3ccd976" Nov 4 04:53:28.142429 containerd[1675]: time="2025-11-04T04:53:28.142109022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 04:53:28.456367 containerd[1675]: time="2025-11-04T04:53:28.456261875Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:28.456918 containerd[1675]: time="2025-11-04T04:53:28.456769550Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 04:53:28.456918 containerd[1675]: time="2025-11-04T04:53:28.456846734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:28.457958 kubelet[2985]: E1104 04:53:28.457059 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:53:28.457958 kubelet[2985]: E1104 04:53:28.457087 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:53:28.457958 kubelet[2985]: E1104 04:53:28.457132 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod 
goldmane-7c778bb748-c5fgp_calico-system(c17ebeee-c521-467f-b0f7-4c3c787f171e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:28.457958 kubelet[2985]: E1104 04:53:28.457164 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c5fgp" podUID="c17ebeee-c521-467f-b0f7-4c3c787f171e" Nov 4 04:53:29.142715 containerd[1675]: time="2025-11-04T04:53:29.142127051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:53:29.439132 containerd[1675]: time="2025-11-04T04:53:29.439043781Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:29.439898 containerd[1675]: time="2025-11-04T04:53:29.439869512Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:53:29.439961 containerd[1675]: time="2025-11-04T04:53:29.439936034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:29.441170 kubelet[2985]: E1104 04:53:29.440240 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:53:29.441170 kubelet[2985]: E1104 04:53:29.440273 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:53:29.441170 kubelet[2985]: E1104 04:53:29.440330 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b698f4965-vbq5r_calico-apiserver(7732b0eb-674a-43e3-92d8-1fb67b4f9d41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:29.441170 kubelet[2985]: E1104 04:53:29.440352 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b698f4965-vbq5r" podUID="7732b0eb-674a-43e3-92d8-1fb67b4f9d41" Nov 4 04:53:30.177416 containerd[1675]: time="2025-11-04T04:53:30.177238283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 04:53:30.521574 containerd[1675]: time="2025-11-04T04:53:30.521485930Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:30.521820 containerd[1675]: time="2025-11-04T04:53:30.521801624Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 04:53:30.521869 containerd[1675]: time="2025-11-04T04:53:30.521854274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:30.521989 kubelet[2985]: E1104 04:53:30.521952 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:53:30.522196 kubelet[2985]: E1104 04:53:30.521996 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:53:30.522196 kubelet[2985]: E1104 04:53:30.522055 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-747674d8bc-lrrzc_calico-system(f119f006-6b2e-4995-831f-8cf672906209): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:30.522196 kubelet[2985]: E1104 04:53:30.522078 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-747674d8bc-lrrzc" podUID="f119f006-6b2e-4995-831f-8cf672906209" Nov 4 04:53:31.142234 containerd[1675]: time="2025-11-04T04:53:31.142209225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 04:53:31.473131 containerd[1675]: time="2025-11-04T04:53:31.473053131Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:31.473468 containerd[1675]: time="2025-11-04T04:53:31.473428943Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 04:53:31.473769 containerd[1675]: time="2025-11-04T04:53:31.473493737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:31.473803 kubelet[2985]: E1104 04:53:31.473705 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:53:31.473803 kubelet[2985]: E1104 04:53:31.473754 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:53:31.474037 kubelet[2985]: E1104 04:53:31.473929 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-n8l85_calico-system(fcffefa6-5e43-470d-ba72-4fb9af0e6455): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:31.474677 containerd[1675]: time="2025-11-04T04:53:31.474663548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 04:53:31.796320 containerd[1675]: time="2025-11-04T04:53:31.796244780Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:31.799726 containerd[1675]: time="2025-11-04T04:53:31.799698309Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 04:53:31.799794 containerd[1675]: time="2025-11-04T04:53:31.799757861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:31.799880 kubelet[2985]: E1104 04:53:31.799856 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:53:31.800092 kubelet[2985]: E1104 04:53:31.799884 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:53:31.800092 kubelet[2985]: E1104 04:53:31.799939 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-n8l85_calico-system(fcffefa6-5e43-470d-ba72-4fb9af0e6455): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:31.800092 kubelet[2985]: E1104 04:53:31.799964 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8l85" podUID="fcffefa6-5e43-470d-ba72-4fb9af0e6455" Nov 4 04:53:35.147612 kubelet[2985]: E1104 04:53:35.147523 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c7fd785ff-28wps" podUID="f9df857a-4dd9-492a-a798-bdfc5556871c" Nov 4 04:53:40.144640 kubelet[2985]: E1104 04:53:40.144613 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c5fgp" podUID="c17ebeee-c521-467f-b0f7-4c3c787f171e" Nov 4 04:53:41.141999 kubelet[2985]: E1104 04:53:41.141895 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b698f4965-59j2r" podUID="d45b2a71-f81e-4bd0-9908-32bdf3ccd976" Nov 4 04:53:42.142808 kubelet[2985]: E1104 04:53:42.142586 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b698f4965-vbq5r" podUID="7732b0eb-674a-43e3-92d8-1fb67b4f9d41" Nov 4 04:53:43.142199 kubelet[2985]: E1104 04:53:43.142125 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-747674d8bc-lrrzc" podUID="f119f006-6b2e-4995-831f-8cf672906209" Nov 4 04:53:44.865300 
systemd[1]: Started sshd@7-139.178.70.105:22-147.75.109.163:43428.service - OpenSSH per-connection server daemon (147.75.109.163:43428). Nov 4 04:53:44.985824 sshd[5089]: Accepted publickey for core from 147.75.109.163 port 43428 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:53:44.987885 sshd-session[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:53:44.996182 systemd-logind[1653]: New session 10 of user core. Nov 4 04:53:45.001307 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 4 04:53:45.142564 kubelet[2985]: E1104 04:53:45.142513 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8l85" podUID="fcffefa6-5e43-470d-ba72-4fb9af0e6455" Nov 4 04:53:45.688753 sshd[5092]: Connection closed by 147.75.109.163 port 43428 Nov 4 04:53:45.689149 sshd-session[5089]: pam_unix(sshd:session): session closed for user core Nov 4 04:53:45.703464 systemd-logind[1653]: Session 10 logged out. Waiting for processes to exit. Nov 4 04:53:45.708349 systemd[1]: sshd@7-139.178.70.105:22-147.75.109.163:43428.service: Deactivated successfully. Nov 4 04:53:45.710038 systemd[1]: session-10.scope: Deactivated successfully. Nov 4 04:53:45.711983 systemd-logind[1653]: Removed session 10. 
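The kubelet pod_workers.go entries in this stretch of the journal keep reporting the same ImagePullBackOff failures for the calico-system and calico-apiserver pods. A hypothetical Python sketch of how journal lines of exactly this shape could be tallied per pod; the regexes target only the escaped message format seen here and are not a general journald parser:

```python
# Hypothetical helper: tally failing image refs per pod from kubelet
# "Error syncing pod" / ImagePullBackOff lines shaped like the ones above.
import re
import sys
from collections import defaultdict

IMAGE_RE = re.compile(r'Back-off pulling image \\+"([^"\\]+)')
POD_RE = re.compile(r'pod="([^"]+)"')

def tally(lines):
    failures = defaultdict(set)              # pod -> failing image references
    for line in lines:
        if "Error syncing pod" not in line:
            continue
        pod = POD_RE.search(line)
        if not pod:
            continue
        for image in IMAGE_RE.findall(line):
            failures[pod.group(1)].add(image)
    return failures

if __name__ == "__main__":
    for pod, images in sorted(tally(sys.stdin).items()):
        print(f"{pod}: {', '.join(sorted(images))}")
```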
Nov 4 04:53:47.143057 containerd[1675]: time="2025-11-04T04:53:47.142950594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 04:53:47.468816 containerd[1675]: time="2025-11-04T04:53:47.468502507Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:47.475365 containerd[1675]: time="2025-11-04T04:53:47.475304301Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 04:53:47.475511 containerd[1675]: time="2025-11-04T04:53:47.475400157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:47.476279 kubelet[2985]: E1104 04:53:47.475678 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:53:47.476279 kubelet[2985]: E1104 04:53:47.475723 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:53:47.476279 kubelet[2985]: E1104 04:53:47.475785 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7c7fd785ff-28wps_calico-system(f9df857a-4dd9-492a-a798-bdfc5556871c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:47.477758 containerd[1675]: time="2025-11-04T04:53:47.477270838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 04:53:47.803812 containerd[1675]: time="2025-11-04T04:53:47.803609376Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:47.813548 containerd[1675]: time="2025-11-04T04:53:47.813422100Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 04:53:47.813714 containerd[1675]: time="2025-11-04T04:53:47.813539047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:47.813805 kubelet[2985]: E1104 04:53:47.813778 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:53:47.817842 kubelet[2985]: E1104 04:53:47.813810 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:53:47.817842 kubelet[2985]: E1104 04:53:47.813858 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7c7fd785ff-28wps_calico-system(f9df857a-4dd9-492a-a798-bdfc5556871c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:47.817842 kubelet[2985]: E1104 04:53:47.813885 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c7fd785ff-28wps" podUID="f9df857a-4dd9-492a-a798-bdfc5556871c" Nov 4 04:53:50.700401 systemd[1]: Started sshd@8-139.178.70.105:22-147.75.109.163:35596.service - OpenSSH per-connection server daemon (147.75.109.163:35596). Nov 4 04:53:50.770445 sshd[5115]: Accepted publickey for core from 147.75.109.163 port 35596 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:53:50.771287 sshd-session[5115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:53:50.774102 systemd-logind[1653]: New session 11 of user core. Nov 4 04:53:50.779285 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 4 04:53:50.874109 sshd[5118]: Connection closed by 147.75.109.163 port 35596 Nov 4 04:53:50.874355 sshd-session[5115]: pam_unix(sshd:session): session closed for user core Nov 4 04:53:50.877049 systemd[1]: sshd@8-139.178.70.105:22-147.75.109.163:35596.service: Deactivated successfully. Nov 4 04:53:50.878381 systemd[1]: session-11.scope: Deactivated successfully. Nov 4 04:53:50.878987 systemd-logind[1653]: Session 11 logged out. Waiting for processes to exit. Nov 4 04:53:50.879981 systemd-logind[1653]: Removed session 11. 
Nov 4 04:53:52.142620 containerd[1675]: time="2025-11-04T04:53:52.142211570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:53:52.476102 containerd[1675]: time="2025-11-04T04:53:52.476005250Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:52.476435 containerd[1675]: time="2025-11-04T04:53:52.476411389Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:53:52.476490 containerd[1675]: time="2025-11-04T04:53:52.476468715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:52.476660 kubelet[2985]: E1104 04:53:52.476607 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:53:52.476840 kubelet[2985]: E1104 04:53:52.476666 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:53:52.476840 kubelet[2985]: E1104 04:53:52.476753 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b698f4965-59j2r_calico-apiserver(d45b2a71-f81e-4bd0-9908-32bdf3ccd976): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:52.476840 kubelet[2985]: E1104 04:53:52.476792 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b698f4965-59j2r" podUID="d45b2a71-f81e-4bd0-9908-32bdf3ccd976" Nov 4 04:53:55.142783 containerd[1675]: time="2025-11-04T04:53:55.142732487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 04:53:55.480835 containerd[1675]: time="2025-11-04T04:53:55.480586107Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:55.480986 containerd[1675]: time="2025-11-04T04:53:55.480960265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 04:53:55.481025 containerd[1675]: time="2025-11-04T04:53:55.481012739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:55.481287 kubelet[2985]: E1104 04:53:55.481129 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:53:55.481287 kubelet[2985]: E1104 04:53:55.481187 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:53:55.481287 kubelet[2985]: E1104 04:53:55.481241 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-c5fgp_calico-system(c17ebeee-c521-467f-b0f7-4c3c787f171e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:55.481287 kubelet[2985]: E1104 04:53:55.481263 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c5fgp" podUID="c17ebeee-c521-467f-b0f7-4c3c787f171e" Nov 4 04:53:55.886967 systemd[1]: Started sshd@9-139.178.70.105:22-147.75.109.163:35606.service - OpenSSH per-connection server daemon (147.75.109.163:35606). Nov 4 04:53:55.973571 sshd[5132]: Accepted publickey for core from 147.75.109.163 port 35606 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:53:55.974329 sshd-session[5132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:53:55.978850 systemd-logind[1653]: New session 12 of user core. Nov 4 04:53:55.983234 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 4 04:53:56.039320 sshd[5135]: Connection closed by 147.75.109.163 port 35606 Nov 4 04:53:56.038910 sshd-session[5132]: pam_unix(sshd:session): session closed for user core Nov 4 04:53:56.046545 systemd[1]: sshd@9-139.178.70.105:22-147.75.109.163:35606.service: Deactivated successfully. Nov 4 04:53:56.048406 systemd[1]: session-12.scope: Deactivated successfully. Nov 4 04:53:56.049127 systemd-logind[1653]: Session 12 logged out. Waiting for processes to exit. Nov 4 04:53:56.050683 systemd-logind[1653]: Removed session 12. Nov 4 04:53:56.052483 systemd[1]: Started sshd@10-139.178.70.105:22-147.75.109.163:35610.service - OpenSSH per-connection server daemon (147.75.109.163:35610). Nov 4 04:53:56.093489 sshd[5147]: Accepted publickey for core from 147.75.109.163 port 35610 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:53:56.094368 sshd-session[5147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:53:56.098455 systemd-logind[1653]: New session 13 of user core. Nov 4 04:53:56.105272 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 4 04:53:56.227976 sshd[5150]: Connection closed by 147.75.109.163 port 35610 Nov 4 04:53:56.234538 sshd-session[5147]: pam_unix(sshd:session): session closed for user core Nov 4 04:53:56.246491 systemd[1]: Started sshd@11-139.178.70.105:22-147.75.109.163:35612.service - OpenSSH per-connection server daemon (147.75.109.163:35612). 
Nov 4 04:53:56.247048 systemd[1]: sshd@10-139.178.70.105:22-147.75.109.163:35610.service: Deactivated successfully. Nov 4 04:53:56.248655 systemd[1]: session-13.scope: Deactivated successfully. Nov 4 04:53:56.249475 systemd-logind[1653]: Session 13 logged out. Waiting for processes to exit. Nov 4 04:53:56.251409 systemd-logind[1653]: Removed session 13. Nov 4 04:53:56.352480 sshd[5157]: Accepted publickey for core from 147.75.109.163 port 35612 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:53:56.353566 sshd-session[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:53:56.358365 systemd-logind[1653]: New session 14 of user core. Nov 4 04:53:56.363287 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 4 04:53:56.439979 sshd[5163]: Connection closed by 147.75.109.163 port 35612 Nov 4 04:53:56.440441 sshd-session[5157]: pam_unix(sshd:session): session closed for user core Nov 4 04:53:56.443702 systemd[1]: sshd@11-139.178.70.105:22-147.75.109.163:35612.service: Deactivated successfully. Nov 4 04:53:56.444763 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 04:53:56.445939 systemd-logind[1653]: Session 14 logged out. Waiting for processes to exit. Nov 4 04:53:56.447013 systemd-logind[1653]: Removed session 14. Nov 4 04:53:57.143203 containerd[1675]: time="2025-11-04T04:53:57.142511048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:53:57.492342 containerd[1675]: time="2025-11-04T04:53:57.492014240Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:57.492809 containerd[1675]: time="2025-11-04T04:53:57.492758362Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:53:57.492960 containerd[1675]: time="2025-11-04T04:53:57.492888124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:57.493158 kubelet[2985]: E1104 04:53:57.493107 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:53:57.493430 kubelet[2985]: E1104 04:53:57.493161 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:53:57.493465 containerd[1675]: time="2025-11-04T04:53:57.493358595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 04:53:57.494270 kubelet[2985]: E1104 04:53:57.494205 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b698f4965-vbq5r_calico-apiserver(7732b0eb-674a-43e3-92d8-1fb67b4f9d41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:57.494270 kubelet[2985]: 
E1104 04:53:57.494231 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b698f4965-vbq5r" podUID="7732b0eb-674a-43e3-92d8-1fb67b4f9d41" Nov 4 04:53:57.839529 containerd[1675]: time="2025-11-04T04:53:57.839274129Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:57.846746 containerd[1675]: time="2025-11-04T04:53:57.846576929Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 04:53:57.846746 containerd[1675]: time="2025-11-04T04:53:57.846631632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:57.846827 kubelet[2985]: E1104 04:53:57.846708 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:53:57.846827 kubelet[2985]: E1104 04:53:57.846733 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:53:57.847030 kubelet[2985]: E1104 04:53:57.846922 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-747674d8bc-lrrzc_calico-system(f119f006-6b2e-4995-831f-8cf672906209): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:57.847112 kubelet[2985]: E1104 04:53:57.847093 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-747674d8bc-lrrzc" podUID="f119f006-6b2e-4995-831f-8cf672906209" Nov 4 04:53:58.215984 kubelet[2985]: E1104 04:53:58.215766 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c7fd785ff-28wps" podUID="f9df857a-4dd9-492a-a798-bdfc5556871c" Nov 4 04:53:59.143345 containerd[1675]: time="2025-11-04T04:53:59.143319025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 04:53:59.480948 containerd[1675]: time="2025-11-04T04:53:59.480861464Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:59.484053 containerd[1675]: time="2025-11-04T04:53:59.484029120Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 04:53:59.484181 containerd[1675]: time="2025-11-04T04:53:59.484073522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:59.484204 kubelet[2985]: E1104 04:53:59.484180 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:53:59.484434 kubelet[2985]: E1104 04:53:59.484208 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:53:59.484434 kubelet[2985]: E1104 04:53:59.484253 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-n8l85_calico-system(fcffefa6-5e43-470d-ba72-4fb9af0e6455): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:59.485447 containerd[1675]: time="2025-11-04T04:53:59.485132580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 04:53:59.862007 containerd[1675]: time="2025-11-04T04:53:59.861923763Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:53:59.866388 containerd[1675]: time="2025-11-04T04:53:59.866354298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 04:53:59.866498 containerd[1675]: time="2025-11-04T04:53:59.866370593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 4 04:53:59.866554 kubelet[2985]: E1104 04:53:59.866524 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:53:59.866609 kubelet[2985]: E1104 04:53:59.866560 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:53:59.866641 kubelet[2985]: E1104 04:53:59.866616 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-n8l85_calico-system(fcffefa6-5e43-470d-ba72-4fb9af0e6455): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 04:53:59.866732 kubelet[2985]: E1104 04:53:59.866707 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8l85" podUID="fcffefa6-5e43-470d-ba72-4fb9af0e6455" Nov 4 04:54:01.449377 systemd[1]: Started sshd@12-139.178.70.105:22-147.75.109.163:42614.service - OpenSSH per-connection server daemon (147.75.109.163:42614). Nov 4 04:54:01.490587 sshd[5182]: Accepted publickey for core from 147.75.109.163 port 42614 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:54:01.491405 sshd-session[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:54:01.495803 systemd-logind[1653]: New session 15 of user core. Nov 4 04:54:01.500233 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 4 04:54:01.552350 sshd[5185]: Connection closed by 147.75.109.163 port 42614 Nov 4 04:54:01.553488 sshd-session[5182]: pam_unix(sshd:session): session closed for user core Nov 4 04:54:01.555524 systemd[1]: sshd@12-139.178.70.105:22-147.75.109.163:42614.service: Deactivated successfully. Nov 4 04:54:01.557023 systemd[1]: session-15.scope: Deactivated successfully. Nov 4 04:54:01.558716 systemd-logind[1653]: Session 15 logged out. Waiting for processes to exit. Nov 4 04:54:01.559691 systemd-logind[1653]: Removed session 15. 
Nov 4 04:54:04.143888 kubelet[2985]: E1104 04:54:04.143861 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b698f4965-59j2r" podUID="d45b2a71-f81e-4bd0-9908-32bdf3ccd976" Nov 4 04:54:06.565452 systemd[1]: Started sshd@13-139.178.70.105:22-147.75.109.163:42622.service - OpenSSH per-connection server daemon (147.75.109.163:42622). Nov 4 04:54:06.886757 sshd[5197]: Accepted publickey for core from 147.75.109.163 port 42622 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:54:06.887669 sshd-session[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:54:06.891305 systemd-logind[1653]: New session 16 of user core. Nov 4 04:54:06.898538 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 4 04:54:07.005820 sshd[5200]: Connection closed by 147.75.109.163 port 42622 Nov 4 04:54:07.006011 sshd-session[5197]: pam_unix(sshd:session): session closed for user core Nov 4 04:54:07.008838 systemd-logind[1653]: Session 16 logged out. Waiting for processes to exit. Nov 4 04:54:07.009986 systemd[1]: sshd@13-139.178.70.105:22-147.75.109.163:42622.service: Deactivated successfully. Nov 4 04:54:07.011695 systemd[1]: session-16.scope: Deactivated successfully. Nov 4 04:54:07.013278 systemd-logind[1653]: Removed session 16. Nov 4 04:54:08.143125 kubelet[2985]: E1104 04:54:08.143079 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c5fgp" podUID="c17ebeee-c521-467f-b0f7-4c3c787f171e" Nov 4 04:54:10.143061 kubelet[2985]: E1104 04:54:10.142877 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b698f4965-vbq5r" podUID="7732b0eb-674a-43e3-92d8-1fb67b4f9d41" Nov 4 04:54:10.143893 kubelet[2985]: E1104 04:54:10.143577 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c7fd785ff-28wps" podUID="f9df857a-4dd9-492a-a798-bdfc5556871c" Nov 4 04:54:11.141529 kubelet[2985]: E1104 04:54:11.141484 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-747674d8bc-lrrzc" podUID="f119f006-6b2e-4995-831f-8cf672906209" Nov 4 04:54:11.142912 kubelet[2985]: E1104 04:54:11.142884 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8l85" podUID="fcffefa6-5e43-470d-ba72-4fb9af0e6455" Nov 4 04:54:12.015018 systemd[1]: Started sshd@14-139.178.70.105:22-147.75.109.163:41456.service - OpenSSH per-connection server daemon (147.75.109.163:41456). Nov 4 04:54:12.400233 sshd[5214]: Accepted publickey for core from 147.75.109.163 port 41456 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:54:12.401423 sshd-session[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:54:12.404647 systemd-logind[1653]: New session 17 of user core. Nov 4 04:54:12.409254 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 4 04:54:12.579550 sshd[5218]: Connection closed by 147.75.109.163 port 41456 Nov 4 04:54:12.579968 sshd-session[5214]: pam_unix(sshd:session): session closed for user core Nov 4 04:54:12.607782 systemd[1]: sshd@14-139.178.70.105:22-147.75.109.163:41456.service: Deactivated successfully. Nov 4 04:54:12.609219 systemd[1]: session-17.scope: Deactivated successfully. Nov 4 04:54:12.609746 systemd-logind[1653]: Session 17 logged out. Waiting for processes to exit. Nov 4 04:54:12.610439 systemd-logind[1653]: Removed session 17. 
Nov 4 04:54:16.142102 kubelet[2985]: E1104 04:54:16.141887 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b698f4965-59j2r" podUID="d45b2a71-f81e-4bd0-9908-32bdf3ccd976" Nov 4 04:54:17.578329 systemd[1]: Started sshd@15-139.178.70.105:22-147.75.109.163:41466.service - OpenSSH per-connection server daemon (147.75.109.163:41466). Nov 4 04:54:17.630285 sshd[5256]: Accepted publickey for core from 147.75.109.163 port 41466 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:54:17.631215 sshd-session[5256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:54:17.634591 systemd-logind[1653]: New session 18 of user core. Nov 4 04:54:17.643353 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 4 04:54:17.714216 sshd[5259]: Connection closed by 147.75.109.163 port 41466 Nov 4 04:54:17.714564 sshd-session[5256]: pam_unix(sshd:session): session closed for user core Nov 4 04:54:17.721423 systemd[1]: sshd@15-139.178.70.105:22-147.75.109.163:41466.service: Deactivated successfully. Nov 4 04:54:17.722659 systemd[1]: session-18.scope: Deactivated successfully. Nov 4 04:54:17.723318 systemd-logind[1653]: Session 18 logged out. Waiting for processes to exit. Nov 4 04:54:17.724934 systemd[1]: Started sshd@16-139.178.70.105:22-147.75.109.163:41482.service - OpenSSH per-connection server daemon (147.75.109.163:41482). Nov 4 04:54:17.725731 systemd-logind[1653]: Removed session 18. Nov 4 04:54:17.775684 sshd[5271]: Accepted publickey for core from 147.75.109.163 port 41482 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:54:17.776673 sshd-session[5271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:54:17.780117 systemd-logind[1653]: New session 19 of user core. Nov 4 04:54:17.786280 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 4 04:54:18.565651 sshd[5274]: Connection closed by 147.75.109.163 port 41482 Nov 4 04:54:18.567622 sshd-session[5271]: pam_unix(sshd:session): session closed for user core Nov 4 04:54:18.582266 systemd[1]: Started sshd@17-139.178.70.105:22-147.75.109.163:41484.service - OpenSSH per-connection server daemon (147.75.109.163:41484). Nov 4 04:54:18.586518 systemd[1]: sshd@16-139.178.70.105:22-147.75.109.163:41482.service: Deactivated successfully. Nov 4 04:54:18.592596 systemd[1]: session-19.scope: Deactivated successfully. Nov 4 04:54:18.596329 systemd-logind[1653]: Session 19 logged out. Waiting for processes to exit. Nov 4 04:54:18.598441 systemd-logind[1653]: Removed session 19. Nov 4 04:54:18.758874 sshd[5282]: Accepted publickey for core from 147.75.109.163 port 41484 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:54:18.760512 sshd-session[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:54:18.763861 systemd-logind[1653]: New session 20 of user core. Nov 4 04:54:18.774253 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 4 04:54:20.065324 sshd[5288]: Connection closed by 147.75.109.163 port 41484 Nov 4 04:54:20.065771 sshd-session[5282]: pam_unix(sshd:session): session closed for user core Nov 4 04:54:20.072925 systemd[1]: sshd@17-139.178.70.105:22-147.75.109.163:41484.service: Deactivated successfully. Nov 4 04:54:20.075875 systemd[1]: session-20.scope: Deactivated successfully. Nov 4 04:54:20.077675 systemd-logind[1653]: Session 20 logged out. Waiting for processes to exit. Nov 4 04:54:20.082784 systemd[1]: Started sshd@18-139.178.70.105:22-147.75.109.163:33080.service - OpenSSH per-connection server daemon (147.75.109.163:33080). Nov 4 04:54:20.087931 systemd-logind[1653]: Removed session 20. Nov 4 04:54:20.139480 sshd[5303]: Accepted publickey for core from 147.75.109.163 port 33080 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:54:20.140304 sshd-session[5303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:54:20.145316 systemd-logind[1653]: New session 21 of user core. Nov 4 04:54:20.149220 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 4 04:54:20.170905 kubelet[2985]: E1104 04:54:20.170876 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c5fgp" podUID="c17ebeee-c521-467f-b0f7-4c3c787f171e" Nov 4 04:54:20.517966 sshd[5306]: Connection closed by 147.75.109.163 port 33080 Nov 4 04:54:20.518986 sshd-session[5303]: pam_unix(sshd:session): session closed for user core Nov 4 04:54:20.527079 systemd[1]: sshd@18-139.178.70.105:22-147.75.109.163:33080.service: Deactivated successfully. Nov 4 04:54:20.531173 systemd[1]: session-21.scope: Deactivated successfully. Nov 4 04:54:20.533274 systemd-logind[1653]: Session 21 logged out. Waiting for processes to exit. Nov 4 04:54:20.535970 systemd-logind[1653]: Removed session 21. Nov 4 04:54:20.542561 systemd[1]: Started sshd@19-139.178.70.105:22-147.75.109.163:33096.service - OpenSSH per-connection server daemon (147.75.109.163:33096). Nov 4 04:54:20.591854 sshd[5316]: Accepted publickey for core from 147.75.109.163 port 33096 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:54:20.592687 sshd-session[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:54:20.599752 systemd-logind[1653]: New session 22 of user core. Nov 4 04:54:20.603274 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 4 04:54:20.686633 sshd[5319]: Connection closed by 147.75.109.163 port 33096 Nov 4 04:54:20.689690 systemd[1]: sshd@19-139.178.70.105:22-147.75.109.163:33096.service: Deactivated successfully. Nov 4 04:54:20.687004 sshd-session[5316]: pam_unix(sshd:session): session closed for user core Nov 4 04:54:20.691131 systemd[1]: session-22.scope: Deactivated successfully. Nov 4 04:54:20.691914 systemd-logind[1653]: Session 22 logged out. Waiting for processes to exit. Nov 4 04:54:20.693125 systemd-logind[1653]: Removed session 22. 
Nov 4 04:54:22.142624 kubelet[2985]: E1104 04:54:22.142120 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c7fd785ff-28wps" podUID="f9df857a-4dd9-492a-a798-bdfc5556871c" Nov 4 04:54:23.141911 kubelet[2985]: E1104 04:54:23.141885 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b698f4965-vbq5r" podUID="7732b0eb-674a-43e3-92d8-1fb67b4f9d41" Nov 4 04:54:24.142776 kubelet[2985]: E1104 04:54:24.142612 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-747674d8bc-lrrzc" podUID="f119f006-6b2e-4995-831f-8cf672906209" Nov 4 04:54:25.700369 systemd[1]: Started sshd@20-139.178.70.105:22-147.75.109.163:33100.service - OpenSSH per-connection server daemon (147.75.109.163:33100). Nov 4 04:54:25.742179 sshd[5334]: Accepted publickey for core from 147.75.109.163 port 33100 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:54:25.743047 sshd-session[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:54:25.748013 systemd-logind[1653]: New session 23 of user core. Nov 4 04:54:25.754307 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 4 04:54:25.814392 sshd[5337]: Connection closed by 147.75.109.163 port 33100 Nov 4 04:54:25.814824 sshd-session[5334]: pam_unix(sshd:session): session closed for user core Nov 4 04:54:25.819147 systemd[1]: sshd@20-139.178.70.105:22-147.75.109.163:33100.service: Deactivated successfully. Nov 4 04:54:25.820936 systemd[1]: session-23.scope: Deactivated successfully. Nov 4 04:54:25.821492 systemd-logind[1653]: Session 23 logged out. Waiting for processes to exit. Nov 4 04:54:25.822284 systemd-logind[1653]: Removed session 23. 
Nov 4 04:54:26.144958 kubelet[2985]: E1104 04:54:26.144932 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8l85" podUID="fcffefa6-5e43-470d-ba72-4fb9af0e6455" Nov 4 04:54:29.141505 kubelet[2985]: E1104 04:54:29.141447 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b698f4965-59j2r" podUID="d45b2a71-f81e-4bd0-9908-32bdf3ccd976" Nov 4 04:54:30.824229 systemd[1]: Started sshd@21-139.178.70.105:22-147.75.109.163:35656.service - OpenSSH per-connection server daemon (147.75.109.163:35656). Nov 4 04:54:30.880856 sshd[5356]: Accepted publickey for core from 147.75.109.163 port 35656 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:54:30.890019 sshd-session[5356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:54:30.893784 systemd-logind[1653]: New session 24 of user core. Nov 4 04:54:30.898364 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 4 04:54:31.034557 sshd[5359]: Connection closed by 147.75.109.163 port 35656 Nov 4 04:54:31.036276 sshd-session[5356]: pam_unix(sshd:session): session closed for user core Nov 4 04:54:31.040080 systemd[1]: sshd@21-139.178.70.105:22-147.75.109.163:35656.service: Deactivated successfully. Nov 4 04:54:31.041118 systemd[1]: session-24.scope: Deactivated successfully. Nov 4 04:54:31.042561 systemd-logind[1653]: Session 24 logged out. Waiting for processes to exit. Nov 4 04:54:31.044119 systemd-logind[1653]: Removed session 24. 
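From 04:54:04 onwards most failures are reported as ImagePullBackOff rather than fresh pull attempts: after each ErrImagePull the kubelet waits before retrying, on a capped exponential back-off (commonly cited defaults are roughly a 10-second initial delay doubling up to a 5-minute cap; those values are assumed here, not read from this log). For illustration only, and not the kubelet's actual implementation, a sketch of that kind of schedule:

from dataclasses import dataclass

@dataclass
class PullBackoff:
    # Illustrative capped exponential back-off of the kind that produces the
    # widening "Back-off pulling image" intervals seen above.
    initial: float = 10.0   # seconds before the first retry (assumed default)
    cap: float = 300.0      # never wait longer than this (assumed default)
    _next: float = 0.0

    def next_delay(self) -> float:
        # Return the delay to wait before the next pull attempt.
        self._next = self.initial if self._next == 0 else min(self._next * 2, self.cap)
        return self._next

    def reset(self) -> None:
        # Call after a successful pull so the next failure starts small again.
        self._next = 0.0

if __name__ == "__main__":
    backoff = PullBackoff()
    print([backoff.next_delay() for _ in range(6)])  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]

This is why the same pods keep reappearing in the journal at progressively wider intervals rather than on every sync.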
Nov 4 04:54:31.143697 kubelet[2985]: E1104 04:54:31.143447 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c5fgp" podUID="c17ebeee-c521-467f-b0f7-4c3c787f171e" Nov 4 04:54:33.141981 containerd[1675]: time="2025-11-04T04:54:33.141732813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 04:54:33.495296 containerd[1675]: time="2025-11-04T04:54:33.495176362Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:54:33.497373 containerd[1675]: time="2025-11-04T04:54:33.497333882Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 04:54:33.497428 containerd[1675]: time="2025-11-04T04:54:33.497394759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 4 04:54:33.505573 kubelet[2985]: E1104 04:54:33.503011 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:54:33.509833 kubelet[2985]: E1104 04:54:33.509672 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:54:33.515710 kubelet[2985]: E1104 04:54:33.511308 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7c7fd785ff-28wps_calico-system(f9df857a-4dd9-492a-a798-bdfc5556871c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 04:54:33.516799 containerd[1675]: time="2025-11-04T04:54:33.516560921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 04:54:33.842068 containerd[1675]: time="2025-11-04T04:54:33.841667358Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:54:33.869628 containerd[1675]: time="2025-11-04T04:54:33.869586846Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 04:54:33.869984 containerd[1675]: time="2025-11-04T04:54:33.869603218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 4 04:54:33.870030 kubelet[2985]: E1104 04:54:33.869898 2985 log.go:32] "PullImage from image service failed" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:54:33.870030 kubelet[2985]: E1104 04:54:33.869930 2985 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:54:33.870309 kubelet[2985]: E1104 04:54:33.870294 2985 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7c7fd785ff-28wps_calico-system(f9df857a-4dd9-492a-a798-bdfc5556871c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 04:54:33.870405 kubelet[2985]: E1104 04:54:33.870389 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c7fd785ff-28wps" podUID="f9df857a-4dd9-492a-a798-bdfc5556871c" Nov 4 04:54:34.142649 kubelet[2985]: E1104 04:54:34.142390 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b698f4965-vbq5r" podUID="7732b0eb-674a-43e3-92d8-1fb67b4f9d41" Nov 4 04:54:35.141587 kubelet[2985]: E1104 04:54:35.141534 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-747674d8bc-lrrzc" podUID="f119f006-6b2e-4995-831f-8cf672906209" Nov 4 04:54:36.045215 systemd[1]: Started sshd@22-139.178.70.105:22-147.75.109.163:35664.service - OpenSSH per-connection server daemon (147.75.109.163:35664). 
Nov 4 04:54:36.087493 sshd[5373]: Accepted publickey for core from 147.75.109.163 port 35664 ssh2: RSA SHA256:0ivB5Oq6GZf2x7NCrEMZD9tOsD3Rg6IbL/XhZjctde4 Nov 4 04:54:36.087232 sshd-session[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:54:36.095083 systemd-logind[1653]: New session 25 of user core. Nov 4 04:54:36.101294 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 4 04:54:36.268578 sshd[5376]: Connection closed by 147.75.109.163 port 35664 Nov 4 04:54:36.269045 sshd-session[5373]: pam_unix(sshd:session): session closed for user core Nov 4 04:54:36.272347 systemd[1]: sshd@22-139.178.70.105:22-147.75.109.163:35664.service: Deactivated successfully. Nov 4 04:54:36.273789 systemd[1]: session-25.scope: Deactivated successfully. Nov 4 04:54:36.274487 systemd-logind[1653]: Session 25 logged out. Waiting for processes to exit. Nov 4 04:54:36.275527 systemd-logind[1653]: Removed session 25. Nov 4 04:54:38.143053 kubelet[2985]: E1104 04:54:38.143025 2985 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8l85" podUID="fcffefa6-5e43-470d-ba72-4fb9af0e6455"
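Condensed, this whole stretch of the journal says the same thing repeatedly: a handful of Calico pods cannot start because none of the ghcr.io/flatcar/calico/*:v3.30.4 tags resolve. A small sketch (assuming the journal text has been saved to a plain-text file, hypothetically named node.log) that collapses such a dump into a per-pod summary of the unresolvable images:

import re
from collections import defaultdict

POD_RE = re.compile(r'pod="(?P<pod>[^"]+)"')
IMAGE_RE = re.compile(r'ghcr\.io/flatcar/calico/[a-z-]+:v[0-9.]+')

def summarize(path: str) -> dict[str, set[str]]:
    # Map "namespace/pod" -> set of image references it failed to pull.
    failures: dict[str, set[str]] = defaultdict(set)
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if "Error syncing pod" not in line:
                continue
            pod = POD_RE.search(line)
            if not pod:
                continue
            for image in IMAGE_RE.findall(line):
                failures[pod.group("pod")].add(image)
    return failures

if __name__ == "__main__":
    for pod, images in sorted(summarize("node.log").items()):
        print(pod)
        for image in sorted(images):
            print(f"  cannot pull: {image}")

Run over this section, it would reduce the lines above to half a dozen pods (e.g. calico-system/csi-node-driver-n8l85 with the csi and node-driver-registrar images), each pinned to the v3.30.4 tags it cannot pull.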