Oct 31 05:25:08.866870 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 03:34:59 -00 2025 Oct 31 05:25:08.866890 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=901c893e02c15be8852b9ecc6e1436f31ef98f77ceb926ac27b04b3e43d366de Oct 31 05:25:08.866896 kernel: Disabled fast string operations Oct 31 05:25:08.866901 kernel: BIOS-provided physical RAM map: Oct 31 05:25:08.866905 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Oct 31 05:25:08.866909 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Oct 31 05:25:08.866931 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Oct 31 05:25:08.866939 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Oct 31 05:25:08.866944 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Oct 31 05:25:08.866949 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Oct 31 05:25:08.866953 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Oct 31 05:25:08.866958 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Oct 31 05:25:08.866963 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Oct 31 05:25:08.866967 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Oct 31 05:25:08.866974 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Oct 31 05:25:08.866979 kernel: NX (Execute Disable) protection: active Oct 31 05:25:08.866984 kernel: APIC: Static calls initialized Oct 31 05:25:08.866993 kernel: SMBIOS 2.7 present. Oct 31 05:25:08.867002 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Oct 31 05:25:08.867009 kernel: DMI: Memory slots populated: 1/128 Oct 31 05:25:08.867018 kernel: vmware: hypercall mode: 0x00 Oct 31 05:25:08.867025 kernel: Hypervisor detected: VMware Oct 31 05:25:08.867032 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Oct 31 05:25:08.867040 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Oct 31 05:25:08.867047 kernel: vmware: using clock offset of 3220317366 ns Oct 31 05:25:08.867054 kernel: tsc: Detected 3408.000 MHz processor Oct 31 05:25:08.867061 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 31 05:25:08.867068 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 31 05:25:08.867075 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Oct 31 05:25:08.867081 kernel: total RAM covered: 3072M Oct 31 05:25:08.867088 kernel: Found optimal setting for mtrr clean up Oct 31 05:25:08.867094 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Oct 31 05:25:08.867099 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Oct 31 05:25:08.867105 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 31 05:25:08.867110 kernel: Using GB pages for direct mapping Oct 31 05:25:08.867116 kernel: ACPI: Early table checksum verification disabled Oct 31 05:25:08.867121 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Oct 31 05:25:08.867128 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Oct 31 05:25:08.867133 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Oct 31 05:25:08.867139 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Oct 31 05:25:08.867146 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Oct 31 05:25:08.867152 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Oct 31 05:25:08.867157 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Oct 31 05:25:08.867164 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Oct 31 05:25:08.867170 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Oct 31 05:25:08.867176 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Oct 31 05:25:08.867182 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Oct 31 05:25:08.867187 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Oct 31 05:25:08.867193 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Oct 31 05:25:08.867200 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Oct 31 05:25:08.867206 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Oct 31 05:25:08.867211 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Oct 31 05:25:08.867217 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Oct 31 05:25:08.867223 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Oct 31 05:25:08.867228 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Oct 31 05:25:08.867234 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Oct 31 05:25:08.867240 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Oct 31 05:25:08.867246 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Oct 31 05:25:08.867251 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Oct 31 05:25:08.867257 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Oct 31 05:25:08.867263 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Oct 31 05:25:08.867268 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff] Oct 31 05:25:08.867274 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff] Oct 31 05:25:08.867280 kernel: Zone ranges: Oct 31 05:25:08.867287 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 31 05:25:08.867293 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Oct 31 05:25:08.867298 kernel: Normal empty Oct 31 05:25:08.867304 kernel: Device empty Oct 31 05:25:08.867309 kernel: Movable zone start for each node Oct 31 05:25:08.867315 kernel: Early memory node ranges Oct 31 05:25:08.867320 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Oct 31 05:25:08.867326 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Oct 31 05:25:08.867333 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Oct 31 05:25:08.867339 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Oct 31 05:25:08.867344 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 31 05:25:08.867350 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Oct 31 05:25:08.867355 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Oct 31 05:25:08.867361 kernel: ACPI: PM-Timer IO Port: 0x1008 Oct 31 05:25:08.867367 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Oct 31 05:25:08.867373 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Oct 31 05:25:08.867379 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Oct 31 05:25:08.867384 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Oct 31 05:25:08.867390 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Oct 31 05:25:08.867395 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Oct 31 05:25:08.867401 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge 
lint[0x1]) Oct 31 05:25:08.867406 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Oct 31 05:25:08.867412 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Oct 31 05:25:08.867418 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Oct 31 05:25:08.867424 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Oct 31 05:25:08.867429 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Oct 31 05:25:08.867434 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Oct 31 05:25:08.867440 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Oct 31 05:25:08.867445 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Oct 31 05:25:08.867451 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Oct 31 05:25:08.867456 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Oct 31 05:25:08.867463 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Oct 31 05:25:08.867468 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Oct 31 05:25:08.867474 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Oct 31 05:25:08.867479 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Oct 31 05:25:08.867484 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Oct 31 05:25:08.867490 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Oct 31 05:25:08.867495 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Oct 31 05:25:08.867501 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Oct 31 05:25:08.867507 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Oct 31 05:25:08.867513 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Oct 31 05:25:08.867518 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Oct 31 05:25:08.867523 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Oct 31 05:25:08.867529 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Oct 31 05:25:08.867534 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Oct 31 05:25:08.867540 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Oct 31 05:25:08.867545 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Oct 31 05:25:08.867552 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Oct 31 05:25:08.867557 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Oct 31 05:25:08.867563 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Oct 31 05:25:08.867568 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Oct 31 05:25:08.867574 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Oct 31 05:25:08.867579 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Oct 31 05:25:08.867586 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Oct 31 05:25:08.867595 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Oct 31 05:25:08.867601 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Oct 31 05:25:08.867607 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Oct 31 05:25:08.867614 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Oct 31 05:25:08.867620 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Oct 31 05:25:08.867625 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Oct 31 05:25:08.867631 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Oct 31 05:25:08.867638 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Oct 31 05:25:08.867644 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Oct 31 05:25:08.867650 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x31] high edge lint[0x1]) Oct 31 05:25:08.867656 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Oct 31 05:25:08.867662 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Oct 31 05:25:08.867668 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Oct 31 05:25:08.867674 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Oct 31 05:25:08.867680 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Oct 31 05:25:08.867687 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Oct 31 05:25:08.867692 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Oct 31 05:25:08.867698 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Oct 31 05:25:08.867704 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Oct 31 05:25:08.867710 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Oct 31 05:25:08.867716 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Oct 31 05:25:08.867722 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Oct 31 05:25:08.867729 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Oct 31 05:25:08.867735 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Oct 31 05:25:08.867741 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Oct 31 05:25:08.867746 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Oct 31 05:25:08.867752 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Oct 31 05:25:08.867758 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Oct 31 05:25:08.867764 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Oct 31 05:25:08.867770 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Oct 31 05:25:08.867777 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Oct 31 05:25:08.867783 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Oct 31 05:25:08.867789 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Oct 31 05:25:08.867795 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Oct 31 05:25:08.867801 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Oct 31 05:25:08.867806 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Oct 31 05:25:08.867812 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Oct 31 05:25:08.867819 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Oct 31 05:25:08.867825 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Oct 31 05:25:08.867831 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Oct 31 05:25:08.867837 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Oct 31 05:25:08.867843 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Oct 31 05:25:08.867849 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Oct 31 05:25:08.867855 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Oct 31 05:25:08.867861 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Oct 31 05:25:08.867868 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Oct 31 05:25:08.867874 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Oct 31 05:25:08.867879 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Oct 31 05:25:08.867885 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Oct 31 05:25:08.867891 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Oct 31 05:25:08.867897 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Oct 31 05:25:08.867903 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Oct 31 05:25:08.867910 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Oct 31 05:25:08.867924 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Oct 31 05:25:08.867934 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Oct 31 05:25:08.867945 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Oct 31 05:25:08.867952 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Oct 31 05:25:08.867958 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Oct 31 05:25:08.867963 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Oct 31 05:25:08.867969 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Oct 31 05:25:08.867977 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Oct 31 05:25:08.867983 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Oct 31 05:25:08.867989 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Oct 31 05:25:08.867995 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Oct 31 05:25:08.868001 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Oct 31 05:25:08.868006 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Oct 31 05:25:08.868012 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Oct 31 05:25:08.868019 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Oct 31 05:25:08.868025 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Oct 31 05:25:08.868031 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Oct 31 05:25:08.868037 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Oct 31 05:25:08.868043 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Oct 31 05:25:08.868049 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Oct 31 05:25:08.868055 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Oct 31 05:25:08.868060 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Oct 31 05:25:08.868068 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Oct 31 05:25:08.868074 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Oct 31 05:25:08.868080 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Oct 31 05:25:08.868085 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Oct 31 05:25:08.868091 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Oct 31 05:25:08.868097 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Oct 31 05:25:08.868103 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Oct 31 05:25:08.868110 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Oct 31 05:25:08.868116 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Oct 31 05:25:08.868121 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Oct 31 05:25:08.868127 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Oct 31 05:25:08.868133 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Oct 31 05:25:08.868139 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Oct 31 05:25:08.868145 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Oct 31 05:25:08.868151 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Oct 31 05:25:08.868158 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 31 05:25:08.868164 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Oct 31 05:25:08.868170 kernel: TSC deadline timer available Oct 31 05:25:08.868176 kernel: CPU topo: Max. logical packages: 128 Oct 31 05:25:08.868182 kernel: CPU topo: Max. logical dies: 128 Oct 31 05:25:08.868188 kernel: CPU topo: Max. 
dies per package: 1 Oct 31 05:25:08.868193 kernel: CPU topo: Max. threads per core: 1 Oct 31 05:25:08.868200 kernel: CPU topo: Num. cores per package: 1 Oct 31 05:25:08.868206 kernel: CPU topo: Num. threads per package: 1 Oct 31 05:25:08.868212 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs Oct 31 05:25:08.868218 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Oct 31 05:25:08.868224 kernel: Booting paravirtualized kernel on VMware hypervisor Oct 31 05:25:08.868230 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 31 05:25:08.868236 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Oct 31 05:25:08.868243 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Oct 31 05:25:08.868250 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Oct 31 05:25:08.868256 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Oct 31 05:25:08.868262 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Oct 31 05:25:08.868268 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Oct 31 05:25:08.868273 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Oct 31 05:25:08.868279 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Oct 31 05:25:08.868285 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Oct 31 05:25:08.868292 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Oct 31 05:25:08.868298 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Oct 31 05:25:08.868304 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Oct 31 05:25:08.868309 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Oct 31 05:25:08.868315 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Oct 31 05:25:08.868321 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Oct 31 05:25:08.868327 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Oct 31 05:25:08.868334 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Oct 31 05:25:08.868340 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Oct 31 05:25:08.868346 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Oct 31 05:25:08.868353 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=901c893e02c15be8852b9ecc6e1436f31ef98f77ceb926ac27b04b3e43d366de Oct 31 05:25:08.868360 kernel: random: crng init done Oct 31 05:25:08.868365 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Oct 31 05:25:08.868373 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Oct 31 05:25:08.868379 kernel: printk: log_buf_len min size: 262144 bytes Oct 31 05:25:08.868385 kernel: printk: log_buf_len: 1048576 bytes Oct 31 05:25:08.868391 kernel: printk: early log buf free: 245688(93%) Oct 31 05:25:08.868397 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 31 05:25:08.868403 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 31 05:25:08.868409 kernel: Fallback order for Node 0: 0 Oct 31 05:25:08.868415 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 524157 Oct 31 05:25:08.868422 kernel: Policy zone: DMA32 Oct 31 05:25:08.868428 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 31 05:25:08.868434 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Oct 31 05:25:08.868440 kernel: ftrace: allocating 40092 entries in 157 pages Oct 31 05:25:08.868446 kernel: ftrace: allocated 157 pages with 5 groups Oct 31 05:25:08.868452 kernel: Dynamic Preempt: voluntary Oct 31 05:25:08.868459 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 31 05:25:08.868470 kernel: rcu: RCU event tracing is enabled. Oct 31 05:25:08.868481 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Oct 31 05:25:08.868491 kernel: Trampoline variant of Tasks RCU enabled. Oct 31 05:25:08.868497 kernel: Rude variant of Tasks RCU enabled. Oct 31 05:25:08.868503 kernel: Tracing variant of Tasks RCU enabled. Oct 31 05:25:08.868509 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 31 05:25:08.868515 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Oct 31 05:25:08.868521 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Oct 31 05:25:08.868528 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Oct 31 05:25:08.868535 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Oct 31 05:25:08.868541 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Oct 31 05:25:08.868546 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Oct 31 05:25:08.868553 kernel: Console: colour VGA+ 80x25 Oct 31 05:25:08.868559 kernel: printk: legacy console [tty0] enabled Oct 31 05:25:08.868565 kernel: printk: legacy console [ttyS0] enabled Oct 31 05:25:08.868572 kernel: ACPI: Core revision 20240827 Oct 31 05:25:08.868578 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Oct 31 05:25:08.868584 kernel: APIC: Switch to symmetric I/O mode setup Oct 31 05:25:08.868592 kernel: x2apic enabled Oct 31 05:25:08.868602 kernel: APIC: Switched APIC routing to: physical x2apic Oct 31 05:25:08.868611 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 31 05:25:08.868619 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Oct 31 05:25:08.868627 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Oct 31 05:25:08.868633 kernel: Disabled fast string operations Oct 31 05:25:08.868640 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Oct 31 05:25:08.868650 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Oct 31 05:25:08.868659 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 31 05:25:08.868666 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Oct 31 05:25:08.868675 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Oct 31 05:25:08.868687 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Oct 31 05:25:08.868697 kernel: RETBleed: Mitigation: Enhanced IBRS Oct 31 05:25:08.868706 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 31 05:25:08.868717 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 31 05:25:08.868726 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 31 05:25:08.868737 kernel: SRBDS: Unknown: Dependent on hypervisor status Oct 31 05:25:08.868752 kernel: GDS: Unknown: Dependent on hypervisor status Oct 31 05:25:08.868760 kernel: active return thunk: its_return_thunk Oct 31 05:25:08.868766 kernel: ITS: Mitigation: Aligned branch/return thunks Oct 31 05:25:08.868773 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 31 05:25:08.868779 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 31 05:25:08.868786 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 31 05:25:08.868793 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 31 05:25:08.868800 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 31 05:25:08.868811 kernel: Freeing SMP alternatives memory: 32K Oct 31 05:25:08.868821 kernel: pid_max: default: 131072 minimum: 1024 Oct 31 05:25:08.868829 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 31 05:25:08.868836 kernel: landlock: Up and running. Oct 31 05:25:08.868846 kernel: SELinux: Initializing. Oct 31 05:25:08.868854 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 31 05:25:08.868862 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 31 05:25:08.868873 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Oct 31 05:25:08.868882 kernel: Performance Events: Skylake events, core PMU driver. Oct 31 05:25:08.868891 kernel: core: CPUID marked event: 'cpu cycles' unavailable Oct 31 05:25:08.868900 kernel: core: CPUID marked event: 'instructions' unavailable Oct 31 05:25:08.868910 kernel: core: CPUID marked event: 'bus cycles' unavailable Oct 31 05:25:08.868984 kernel: core: CPUID marked event: 'cache references' unavailable Oct 31 05:25:08.868994 kernel: core: CPUID marked event: 'cache misses' unavailable Oct 31 05:25:08.869005 kernel: core: CPUID marked event: 'branch instructions' unavailable Oct 31 05:25:08.869013 kernel: core: CPUID marked event: 'branch misses' unavailable Oct 31 05:25:08.869022 kernel: ... version: 1 Oct 31 05:25:08.869030 kernel: ... bit width: 48 Oct 31 05:25:08.869040 kernel: ... generic registers: 4 Oct 31 05:25:08.869049 kernel: ... value mask: 0000ffffffffffff Oct 31 05:25:08.869057 kernel: ... max period: 000000007fffffff Oct 31 05:25:08.869068 kernel: ... 
fixed-purpose events: 0 Oct 31 05:25:08.869078 kernel: ... event mask: 000000000000000f Oct 31 05:25:08.869087 kernel: signal: max sigframe size: 1776 Oct 31 05:25:08.869095 kernel: rcu: Hierarchical SRCU implementation. Oct 31 05:25:08.869105 kernel: rcu: Max phase no-delay instances is 400. Oct 31 05:25:08.869113 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level Oct 31 05:25:08.869122 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 31 05:25:08.869134 kernel: smp: Bringing up secondary CPUs ... Oct 31 05:25:08.869144 kernel: smpboot: x86: Booting SMP configuration: Oct 31 05:25:08.869151 kernel: .... node #0, CPUs: #1 Oct 31 05:25:08.869157 kernel: Disabled fast string operations Oct 31 05:25:08.869163 kernel: smp: Brought up 1 node, 2 CPUs Oct 31 05:25:08.869169 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Oct 31 05:25:08.869175 kernel: Memory: 1942680K/2096628K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15348K init, 2696K bss, 142568K reserved, 0K cma-reserved) Oct 31 05:25:08.869182 kernel: devtmpfs: initialized Oct 31 05:25:08.869189 kernel: x86/mm: Memory block size: 128MB Oct 31 05:25:08.869195 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Oct 31 05:25:08.869201 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 31 05:25:08.869208 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Oct 31 05:25:08.869214 kernel: pinctrl core: initialized pinctrl subsystem Oct 31 05:25:08.869220 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 31 05:25:08.869226 kernel: audit: initializing netlink subsys (disabled) Oct 31 05:25:08.869233 kernel: audit: type=2000 audit(1761888305.278:1): state=initialized audit_enabled=0 res=1 Oct 31 05:25:08.869239 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 31 05:25:08.869245 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 31 05:25:08.869251 kernel: cpuidle: using governor menu Oct 31 05:25:08.869256 kernel: Simple Boot Flag at 0x36 set to 0x80 Oct 31 05:25:08.869263 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 31 05:25:08.869269 kernel: dca service started, version 1.12.1 Oct 31 05:25:08.869276 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f] Oct 31 05:25:08.869289 kernel: PCI: Using configuration type 1 for base access Oct 31 05:25:08.869297 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 31 05:25:08.869303 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 31 05:25:08.869309 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 31 05:25:08.869316 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 31 05:25:08.869322 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 31 05:25:08.869329 kernel: ACPI: Added _OSI(Module Device) Oct 31 05:25:08.869336 kernel: ACPI: Added _OSI(Processor Device) Oct 31 05:25:08.869342 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 31 05:25:08.870805 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 31 05:25:08.870817 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Oct 31 05:25:08.870824 kernel: ACPI: Interpreter enabled Oct 31 05:25:08.870830 kernel: ACPI: PM: (supports S0 S1 S5) Oct 31 05:25:08.870838 kernel: ACPI: Using IOAPIC for interrupt routing Oct 31 05:25:08.870845 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 31 05:25:08.870852 kernel: PCI: Using E820 reservations for host bridge windows Oct 31 05:25:08.870858 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Oct 31 05:25:08.870864 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Oct 31 05:25:08.872650 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 31 05:25:08.872732 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Oct 31 05:25:08.872800 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Oct 31 05:25:08.872810 kernel: PCI host bridge to bus 0000:00 Oct 31 05:25:08.872876 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 31 05:25:08.872979 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Oct 31 05:25:08.873041 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 31 05:25:08.873104 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 31 05:25:08.873163 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Oct 31 05:25:08.873221 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Oct 31 05:25:08.873300 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint Oct 31 05:25:08.873374 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge Oct 31 05:25:08.873445 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 31 05:25:08.873520 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Oct 31 05:25:08.873592 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint Oct 31 05:25:08.873665 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f] Oct 31 05:25:08.873732 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Oct 31 05:25:08.873805 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk Oct 31 05:25:08.873872 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Oct 31 05:25:08.874139 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk Oct 31 05:25:08.874216 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Oct 31 05:25:08.874288 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Oct 31 05:25:08.874355 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Oct 31 05:25:08.874427 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 
0x088000 conventional PCI endpoint Oct 31 05:25:08.874502 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf] Oct 31 05:25:08.874570 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit] Oct 31 05:25:08.874644 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint Oct 31 05:25:08.874711 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f] Oct 31 05:25:08.874776 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref] Oct 31 05:25:08.874841 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff] Oct 31 05:25:08.874906 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref] Oct 31 05:25:08.874987 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 31 05:25:08.875061 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge Oct 31 05:25:08.875128 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Oct 31 05:25:08.875197 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Oct 31 05:25:08.875263 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Oct 31 05:25:08.875328 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 31 05:25:08.875401 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.875471 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 31 05:25:08.875537 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Oct 31 05:25:08.875604 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Oct 31 05:25:08.875670 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.875742 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.875809 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 31 05:25:08.875878 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Oct 31 05:25:08.876092 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Oct 31 05:25:08.876162 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Oct 31 05:25:08.876228 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.876298 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.876365 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 31 05:25:08.876434 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Oct 31 05:25:08.876502 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Oct 31 05:25:08.876568 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 31 05:25:08.876634 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.876704 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.876784 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 31 05:25:08.876853 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 31 05:25:08.876927 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 31 05:25:08.876997 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.877070 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.877140 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 31 05:25:08.877205 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 31 05:25:08.877271 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 31 05:25:08.877337 
kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.877408 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.877474 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 31 05:25:08.877542 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 31 05:25:08.877609 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 31 05:25:08.877674 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.877743 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.877810 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 31 05:25:08.877875 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 31 05:25:08.877950 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 31 05:25:08.878018 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.878088 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.878155 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 31 05:25:08.878220 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 31 05:25:08.878286 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 31 05:25:08.878355 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.878427 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.878494 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 31 05:25:08.878559 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 31 05:25:08.878625 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 31 05:25:08.878692 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.878766 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.878833 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 31 05:25:08.878898 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 31 05:25:08.878978 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 31 05:25:08.879044 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 31 05:25:08.879109 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.879181 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.879248 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 31 05:25:08.879313 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 31 05:25:08.879378 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 31 05:25:08.879443 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 31 05:25:08.879508 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.879581 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.879648 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 31 05:25:08.879713 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Oct 31 05:25:08.879792 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 31 05:25:08.879860 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.879942 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.880015 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 31 05:25:08.880081 kernel: pci 
0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 31 05:25:08.880147 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 31 05:25:08.880212 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.880284 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.880350 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 31 05:25:08.880419 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 31 05:25:08.880484 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 31 05:25:08.880556 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.880627 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.880694 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 31 05:25:08.880759 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 31 05:25:08.880827 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 31 05:25:08.880893 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.880973 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.881039 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 31 05:25:08.881108 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 31 05:25:08.881174 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 31 05:25:08.881241 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.881311 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.881379 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 31 05:25:08.881447 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 31 05:25:08.881513 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 31 05:25:08.881577 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 31 05:25:08.881645 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.881717 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.881782 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 31 05:25:08.881851 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 31 05:25:08.881922 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 31 05:25:08.881998 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 31 05:25:08.882089 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.882192 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.882264 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 31 05:25:08.882329 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 31 05:25:08.882395 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 31 05:25:08.882461 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 31 05:25:08.882527 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.882610 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.882682 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 31 05:25:08.882748 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 31 05:25:08.882813 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 31 
05:25:08.882878 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.882956 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.883022 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 31 05:25:08.883091 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 31 05:25:08.883157 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 31 05:25:08.883223 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.883295 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.883361 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 31 05:25:08.883428 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 31 05:25:08.883497 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 31 05:25:08.883563 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.883633 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.883699 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 31 05:25:08.883765 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 31 05:25:08.883831 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Oct 31 05:25:08.883899 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.883989 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.884057 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 31 05:25:08.884122 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 31 05:25:08.884187 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 31 05:25:08.884253 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.884327 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.884394 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 31 05:25:08.884459 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 31 05:25:08.884525 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 31 05:25:08.884591 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 31 05:25:08.884665 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.884740 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.884819 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 31 05:25:08.884885 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 31 05:25:08.884998 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Oct 31 05:25:08.885067 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 31 05:25:08.885132 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.885209 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.885275 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 31 05:25:08.885341 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 31 05:25:08.885407 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 31 05:25:08.885472 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.885545 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.885611 kernel: pci 0000:00:18.3: PCI bridge to [bus 
1e] Oct 31 05:25:08.885677 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 31 05:25:08.885742 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 31 05:25:08.885807 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.886612 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.886693 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 31 05:25:08.886763 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 31 05:25:08.886831 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 31 05:25:08.886898 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.886989 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.887059 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 31 05:25:08.887964 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 31 05:25:08.888044 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 31 05:25:08.888143 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.888262 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.888374 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 31 05:25:08.888467 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 31 05:25:08.888539 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 31 05:25:08.888619 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.888704 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 05:25:08.888789 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 31 05:25:08.888870 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 31 05:25:08.888957 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 31 05:25:08.889037 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.889387 kernel: pci_bus 0000:01: extended config space not accessible Oct 31 05:25:08.889500 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 31 05:25:08.889582 kernel: pci_bus 0000:02: extended config space not accessible Oct 31 05:25:08.889594 kernel: acpiphp: Slot [32] registered Oct 31 05:25:08.889600 kernel: acpiphp: Slot [33] registered Oct 31 05:25:08.889609 kernel: acpiphp: Slot [34] registered Oct 31 05:25:08.889616 kernel: acpiphp: Slot [35] registered Oct 31 05:25:08.889623 kernel: acpiphp: Slot [36] registered Oct 31 05:25:08.889629 kernel: acpiphp: Slot [37] registered Oct 31 05:25:08.889636 kernel: acpiphp: Slot [38] registered Oct 31 05:25:08.889642 kernel: acpiphp: Slot [39] registered Oct 31 05:25:08.889648 kernel: acpiphp: Slot [40] registered Oct 31 05:25:08.889655 kernel: acpiphp: Slot [41] registered Oct 31 05:25:08.889663 kernel: acpiphp: Slot [42] registered Oct 31 05:25:08.889669 kernel: acpiphp: Slot [43] registered Oct 31 05:25:08.889676 kernel: acpiphp: Slot [44] registered Oct 31 05:25:08.889682 kernel: acpiphp: Slot [45] registered Oct 31 05:25:08.889689 kernel: acpiphp: Slot [46] registered Oct 31 05:25:08.889695 kernel: acpiphp: Slot [47] registered Oct 31 05:25:08.889701 kernel: acpiphp: Slot [48] registered Oct 31 05:25:08.889709 kernel: acpiphp: Slot [49] registered Oct 31 05:25:08.889716 kernel: acpiphp: Slot [50] registered Oct 31 05:25:08.889722 kernel: acpiphp: Slot [51] registered Oct 31 
05:25:08.889728 kernel: acpiphp: Slot [52] registered Oct 31 05:25:08.889739 kernel: acpiphp: Slot [53] registered Oct 31 05:25:08.889751 kernel: acpiphp: Slot [54] registered Oct 31 05:25:08.889761 kernel: acpiphp: Slot [55] registered Oct 31 05:25:08.889768 kernel: acpiphp: Slot [56] registered Oct 31 05:25:08.889776 kernel: acpiphp: Slot [57] registered Oct 31 05:25:08.889782 kernel: acpiphp: Slot [58] registered Oct 31 05:25:08.889789 kernel: acpiphp: Slot [59] registered Oct 31 05:25:08.889796 kernel: acpiphp: Slot [60] registered Oct 31 05:25:08.889802 kernel: acpiphp: Slot [61] registered Oct 31 05:25:08.889809 kernel: acpiphp: Slot [62] registered Oct 31 05:25:08.889815 kernel: acpiphp: Slot [63] registered Oct 31 05:25:08.889888 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Oct 31 05:25:08.889979 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Oct 31 05:25:08.890048 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Oct 31 05:25:08.890115 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Oct 31 05:25:08.890180 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Oct 31 05:25:08.890253 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Oct 31 05:25:08.890358 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Oct 31 05:25:08.890464 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Oct 31 05:25:08.890542 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Oct 31 05:25:08.891222 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Oct 31 05:25:08.891298 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Oct 31 05:25:08.891369 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Oct 31 05:25:08.891443 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 31 05:25:08.891512 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 31 05:25:08.891581 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 31 05:25:08.891649 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 31 05:25:08.891719 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 31 05:25:08.891787 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 31 05:25:08.891858 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 31 05:25:08.891937 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 31 05:25:08.892014 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Oct 31 05:25:08.892082 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Oct 31 05:25:08.892149 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Oct 31 05:25:08.892215 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Oct 31 05:25:08.892285 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] Oct 31 05:25:08.892352 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Oct 31 05:25:08.892419 kernel: pci 0000:0b:00.0: supports D1 D2 Oct 31 05:25:08.892485 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 31 05:25:08.892553 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Oct 31 05:25:08.892621 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 31 05:25:08.892692 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 31 05:25:08.892761 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 31 05:25:08.892837 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 31 05:25:08.892904 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 31 05:25:08.892988 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 31 05:25:08.893058 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 31 05:25:08.893130 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 31 05:25:08.893203 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 31 05:25:08.893276 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 31 05:25:08.893343 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 31 05:25:08.893410 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 31 05:25:08.893477 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 31 05:25:08.893547 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 31 05:25:08.893615 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 31 05:25:08.893683 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 31 05:25:08.893754 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 31 05:25:08.893822 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 31 05:25:08.893891 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 31 05:25:08.893968 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 31 05:25:08.894039 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 31 05:25:08.894106 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 31 05:25:08.894172 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 31 05:25:08.894238 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 31 05:25:08.894248 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Oct 31 05:25:08.894257 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Oct 31 05:25:08.894263 kernel: ACPI: PCI: Interrupt link LNKB disabled Oct 31 05:25:08.894270 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 31 05:25:08.894276 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Oct 31 05:25:08.894283 kernel: iommu: Default domain type: Translated Oct 31 05:25:08.894289 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 31 05:25:08.894295 kernel: PCI: Using ACPI for IRQ routing Oct 31 05:25:08.894302 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 31 05:25:08.894309 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Oct 31 05:25:08.894316 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Oct 31 05:25:08.894381 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Oct 31 05:25:08.894446 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Oct 31 05:25:08.894512 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 31 05:25:08.894521 kernel: vgaarb: loaded Oct 31 05:25:08.894528 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Oct 31 05:25:08.894537 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Oct 31 05:25:08.894544 kernel: clocksource: Switched to clocksource tsc-early Oct 31 05:25:08.894550 kernel: VFS: Disk quotas dquot_6.6.0 Oct 31 05:25:08.894557 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 31 05:25:08.894563 kernel: pnp: PnP ACPI init Oct 31 05:25:08.894634 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Oct 31 05:25:08.894700 kernel: system 
00:00: [io 0x1040-0x104f] has been reserved Oct 31 05:25:08.894761 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Oct 31 05:25:08.894827 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Oct 31 05:25:08.894893 kernel: pnp 00:06: [dma 2] Oct 31 05:25:08.894970 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Oct 31 05:25:08.895035 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Oct 31 05:25:08.895097 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Oct 31 05:25:08.895106 kernel: pnp: PnP ACPI: found 8 devices Oct 31 05:25:08.895112 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 31 05:25:08.895119 kernel: NET: Registered PF_INET protocol family Oct 31 05:25:08.895126 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 31 05:25:08.895132 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 31 05:25:08.895141 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 31 05:25:08.895147 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 31 05:25:08.895154 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 31 05:25:08.895161 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 31 05:25:08.895167 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 31 05:25:08.895174 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 31 05:25:08.895180 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 31 05:25:08.895188 kernel: NET: Registered PF_XDP protocol family Oct 31 05:25:08.895253 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Oct 31 05:25:08.895322 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Oct 31 05:25:08.895388 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 31 05:25:08.895460 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 31 05:25:08.895527 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 31 05:25:08.895597 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Oct 31 05:25:08.895663 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Oct 31 05:25:08.895730 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Oct 31 05:25:08.895796 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Oct 31 05:25:08.895864 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Oct 31 05:25:08.895944 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Oct 31 05:25:08.896014 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Oct 31 05:25:08.896084 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Oct 31 05:25:08.896150 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Oct 31 05:25:08.896216 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Oct 31 05:25:08.896283 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Oct 31 
05:25:08.896349 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Oct 31 05:25:08.896416 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Oct 31 05:25:08.896485 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Oct 31 05:25:08.896551 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Oct 31 05:25:08.896617 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Oct 31 05:25:08.896682 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Oct 31 05:25:08.896752 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Oct 31 05:25:08.896820 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Oct 31 05:25:08.896889 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Oct 31 05:25:08.897107 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.897177 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.897245 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.897311 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.897378 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.897445 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.897515 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.897582 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.897648 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.897714 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.897785 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.897852 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.897936 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.898007 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.898072 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.898137 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.898210 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.898276 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.898346 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.898411 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.898476 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.898541 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.898607 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.898671 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.898738 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.899811 kernel: pci 0000:00:17.5: 
bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.899891 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.899974 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.900044 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.900112 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.900181 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.900257 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.900331 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.900398 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.900466 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.900533 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.900599 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.900666 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.900736 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.900801 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.900868 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.900943 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.901010 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.901077 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.901147 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.901224 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.901292 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.901358 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.901424 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.901490 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.901557 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.901623 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.901691 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.901758 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.901824 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.901891 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.901975 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.902042 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.902109 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.902189 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.902258 kernel: pci 0000:00:17.4: bridge window [io size 
0x1000]: can't assign; no space Oct 31 05:25:08.902326 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.902392 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.902459 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.902525 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.902591 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.902660 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.902726 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.902797 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.902865 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.902945 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.903026 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.903096 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.903161 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.903233 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.903300 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.903366 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.903432 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.903501 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.903567 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.903633 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.903698 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.903764 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 05:25:08.903830 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Oct 31 05:25:08.903896 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 31 05:25:08.903975 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Oct 31 05:25:08.904046 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Oct 31 05:25:08.904115 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Oct 31 05:25:08.904199 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 31 05:25:08.904268 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Oct 31 05:25:08.904336 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 31 05:25:08.904401 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Oct 31 05:25:08.904470 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Oct 31 05:25:08.904536 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Oct 31 05:25:08.904604 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 31 05:25:08.904670 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Oct 31 05:25:08.904735 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Oct 31 05:25:08.904800 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit 
pref] Oct 31 05:25:08.904867 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 31 05:25:08.904949 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Oct 31 05:25:08.905020 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Oct 31 05:25:08.905091 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 31 05:25:08.905168 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 31 05:25:08.905456 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 31 05:25:08.905525 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 31 05:25:08.905592 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 31 05:25:08.905657 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 31 05:25:08.905727 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 31 05:25:08.905793 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 31 05:25:08.905859 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 31 05:25:08.906348 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 31 05:25:08.906428 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 31 05:25:08.906498 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 31 05:25:08.906569 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 31 05:25:08.906636 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 31 05:25:08.906702 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 31 05:25:08.906768 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 31 05:25:08.906837 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Oct 31 05:25:08.906907 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 31 05:25:08.906991 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 31 05:25:08.907058 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 31 05:25:08.907124 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Oct 31 05:25:08.907202 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 31 05:25:08.907270 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 31 05:25:08.907337 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 31 05:25:08.907405 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 31 05:25:08.907476 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 31 05:25:08.907542 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 31 05:25:08.907608 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 31 05:25:08.907674 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 31 05:25:08.907741 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 31 05:25:08.907806 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Oct 31 05:25:08.907872 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 31 05:25:08.907959 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 31 05:25:08.908027 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 31 05:25:08.908094 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 31 05:25:08.908161 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 31 05:25:08.908227 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 31 05:25:08.908293 kernel: pci 
0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 31 05:25:08.908363 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 31 05:25:08.908430 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 31 05:25:08.908495 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 31 05:25:08.908562 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 31 05:25:08.908628 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 31 05:25:08.908694 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 31 05:25:08.908763 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 31 05:25:08.908829 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 31 05:25:08.908895 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 31 05:25:08.908974 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 31 05:25:08.909044 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 31 05:25:08.909110 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 31 05:25:08.909176 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 31 05:25:08.909245 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 31 05:25:08.909324 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 31 05:25:08.909394 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 31 05:25:08.909461 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 31 05:25:08.909528 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 31 05:25:08.909594 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 31 05:25:08.909660 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 31 05:25:08.909725 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 31 05:25:08.909808 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 31 05:25:08.909885 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 31 05:25:08.909972 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 31 05:25:08.910040 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 31 05:25:08.910107 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 31 05:25:08.910174 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 31 05:25:08.910251 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 31 05:25:08.910318 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 31 05:25:08.910385 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Oct 31 05:25:08.910451 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 31 05:25:08.910517 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 31 05:25:08.910584 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 31 05:25:08.910656 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 31 05:25:08.910723 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 31 05:25:08.910788 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 31 05:25:08.910853 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 31 05:25:08.910937 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 31 05:25:08.911008 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 31 05:25:08.911077 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] 
Oct 31 05:25:08.911142 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 31 05:25:08.911223 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 31 05:25:08.911289 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 31 05:25:08.911355 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 31 05:25:08.911421 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 31 05:25:08.911486 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 31 05:25:08.911555 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 31 05:25:08.911621 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 31 05:25:08.911687 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 31 05:25:08.911753 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 31 05:25:08.911820 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 31 05:25:08.911894 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 31 05:25:08.911977 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 31 05:25:08.912044 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 31 05:25:08.912111 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 31 05:25:08.912196 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 31 05:25:08.912263 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 31 05:25:08.912329 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 31 05:25:08.912399 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 31 05:25:08.912462 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Oct 31 05:25:08.912522 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Oct 31 05:25:08.912582 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Oct 31 05:25:08.912641 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Oct 31 05:25:08.912699 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Oct 31 05:25:08.912765 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Oct 31 05:25:08.912826 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Oct 31 05:25:08.912886 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 31 05:25:08.912960 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Oct 31 05:25:08.913023 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Oct 31 05:25:08.913084 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Oct 31 05:25:08.913149 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Oct 31 05:25:08.913211 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Oct 31 05:25:08.913279 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Oct 31 05:25:08.913341 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Oct 31 05:25:08.913402 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Oct 31 05:25:08.913467 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Oct 31 05:25:08.913531 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Oct 31 05:25:08.913591 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Oct 31 05:25:08.913657 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Oct 31 05:25:08.913719 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Oct 
31 05:25:08.913784 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Oct 31 05:25:08.913855 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Oct 31 05:25:08.913933 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Oct 31 05:25:08.914004 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Oct 31 05:25:08.914065 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 31 05:25:08.914132 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Oct 31 05:25:08.914202 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Oct 31 05:25:08.914271 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Oct 31 05:25:08.914333 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Oct 31 05:25:08.914398 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Oct 31 05:25:08.916482 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Oct 31 05:25:08.916559 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Oct 31 05:25:08.916623 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Oct 31 05:25:08.916685 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Oct 31 05:25:08.916751 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Oct 31 05:25:08.916813 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Oct 31 05:25:08.916875 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Oct 31 05:25:08.917041 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Oct 31 05:25:08.917106 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Oct 31 05:25:08.917167 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Oct 31 05:25:08.917233 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Oct 31 05:25:08.917298 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 31 05:25:08.917377 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Oct 31 05:25:08.917439 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 31 05:25:08.917505 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Oct 31 05:25:08.917580 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Oct 31 05:25:08.917649 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Oct 31 05:25:08.917714 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Oct 31 05:25:08.917778 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Oct 31 05:25:08.917840 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 31 05:25:08.917907 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Oct 31 05:25:08.917987 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Oct 31 05:25:08.918048 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 31 05:25:08.918116 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Oct 31 05:25:08.918178 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Oct 31 05:25:08.918239 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Oct 31 05:25:08.918304 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Oct 31 05:25:08.918365 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Oct 31 05:25:08.918429 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Oct 31 05:25:08.918494 kernel: 
pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Oct 31 05:25:08.918556 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 31 05:25:08.918623 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Oct 31 05:25:08.918685 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 31 05:25:08.918750 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Oct 31 05:25:08.918815 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Oct 31 05:25:08.918879 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Oct 31 05:25:08.918957 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Oct 31 05:25:08.919024 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Oct 31 05:25:08.919086 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 31 05:25:08.919156 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Oct 31 05:25:08.919217 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Oct 31 05:25:08.919278 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Oct 31 05:25:08.919353 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Oct 31 05:25:08.919416 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Oct 31 05:25:08.919477 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Oct 31 05:25:08.919545 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Oct 31 05:25:08.919606 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Oct 31 05:25:08.919673 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Oct 31 05:25:08.919754 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 31 05:25:08.919820 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Oct 31 05:25:08.919886 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Oct 31 05:25:08.919994 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Oct 31 05:25:08.920057 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Oct 31 05:25:08.920123 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Oct 31 05:25:08.920185 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Oct 31 05:25:08.920252 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Oct 31 05:25:08.920314 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 31 05:25:08.920385 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 31 05:25:08.920395 kernel: PCI: CLS 32 bytes, default 64 Oct 31 05:25:08.920401 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 31 05:25:08.920408 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Oct 31 05:25:08.920417 kernel: clocksource: Switched to clocksource tsc Oct 31 05:25:08.920424 kernel: Initialise system trusted keyrings Oct 31 05:25:08.920431 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 31 05:25:08.920437 kernel: Key type asymmetric registered Oct 31 05:25:08.920444 kernel: Asymmetric key parser 'x509' registered Oct 31 05:25:08.920450 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 31 05:25:08.920456 kernel: io scheduler mq-deadline registered Oct 31 05:25:08.920464 kernel: io scheduler kyber registered Oct 31 05:25:08.920470 kernel: io scheduler bfq 
registered Oct 31 05:25:08.920538 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Oct 31 05:25:08.920607 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.920689 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Oct 31 05:25:08.920773 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.920848 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Oct 31 05:25:08.920938 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.921011 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Oct 31 05:25:08.921080 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.921148 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Oct 31 05:25:08.921216 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.921285 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Oct 31 05:25:08.921352 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.921419 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Oct 31 05:25:08.921485 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.921552 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Oct 31 05:25:08.921619 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.921698 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Oct 31 05:25:08.921779 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.921848 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Oct 31 05:25:08.921927 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.921998 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Oct 31 05:25:08.922065 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.922137 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Oct 31 05:25:08.922205 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.922281 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Oct 31 05:25:08.922351 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.922420 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Oct 31 05:25:08.922487 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ 
Oct 31 05:25:08.922557 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Oct 31 05:25:08.922624 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.922693 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Oct 31 05:25:08.922770 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.922840 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Oct 31 05:25:08.922908 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.922991 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Oct 31 05:25:08.923058 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.923125 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Oct 31 05:25:08.923192 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.923261 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Oct 31 05:25:08.923328 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.923396 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Oct 31 05:25:08.923466 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.923534 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Oct 31 05:25:08.923601 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.923668 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Oct 31 05:25:08.923735 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.923819 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Oct 31 05:25:08.923891 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.923984 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Oct 31 05:25:08.926018 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.926092 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Oct 31 05:25:08.926163 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.926233 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Oct 31 05:25:08.926305 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.926380 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Oct 31 05:25:08.926473 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 
05:25:08.926541 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Oct 31 05:25:08.926610 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.926677 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Oct 31 05:25:08.926749 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.926817 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Oct 31 05:25:08.926885 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.926970 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Oct 31 05:25:08.927039 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 05:25:08.927053 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 31 05:25:08.927060 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 31 05:25:08.927069 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 31 05:25:08.927076 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Oct 31 05:25:08.927082 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 31 05:25:08.927089 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 31 05:25:08.927159 kernel: rtc_cmos 00:01: registered as rtc0 Oct 31 05:25:08.927226 kernel: rtc_cmos 00:01: setting system clock to 2025-10-31T05:25:07 UTC (1761888307) Oct 31 05:25:08.927237 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 31 05:25:08.927310 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Oct 31 05:25:08.927325 kernel: intel_pstate: CPU model not supported Oct 31 05:25:08.927332 kernel: NET: Registered PF_INET6 protocol family Oct 31 05:25:08.927339 kernel: Segment Routing with IPv6 Oct 31 05:25:08.927349 kernel: In-situ OAM (IOAM) with IPv6 Oct 31 05:25:08.927359 kernel: NET: Registered PF_PACKET protocol family Oct 31 05:25:08.927366 kernel: Key type dns_resolver registered Oct 31 05:25:08.927373 kernel: IPI shorthand broadcast: enabled Oct 31 05:25:08.927383 kernel: sched_clock: Marking stable (1848003555, 177744986)->(2039338777, -13590236) Oct 31 05:25:08.927390 kernel: registered taskstats version 1 Oct 31 05:25:08.927397 kernel: Loading compiled-in X.509 certificates Oct 31 05:25:08.927404 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 9f8fe332c8b542101b6f0d70ebc06e924c534496' Oct 31 05:25:08.927416 kernel: Demotion targets for Node 0: null Oct 31 05:25:08.927423 kernel: Key type .fscrypt registered Oct 31 05:25:08.927431 kernel: Key type fscrypt-provisioning registered Oct 31 05:25:08.927438 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 31 05:25:08.927448 kernel: ima: Allocated hash algorithm: sha1 Oct 31 05:25:08.927455 kernel: ima: No architecture policies found Oct 31 05:25:08.927462 kernel: clk: Disabling unused clocks Oct 31 05:25:08.927474 kernel: Freeing unused kernel image (initmem) memory: 15348K Oct 31 05:25:08.927482 kernel: Write protecting the kernel read-only data: 45056k Oct 31 05:25:08.927489 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Oct 31 05:25:08.927495 kernel: Run /init as init process Oct 31 05:25:08.927502 kernel: with arguments: Oct 31 05:25:08.927515 kernel: /init Oct 31 05:25:08.927522 kernel: with environment: Oct 31 05:25:08.927531 kernel: HOME=/ Oct 31 05:25:08.927542 kernel: TERM=linux Oct 31 05:25:08.927550 kernel: SCSI subsystem initialized Oct 31 05:25:08.927557 kernel: VMware PVSCSI driver - version 1.0.7.0-k Oct 31 05:25:08.927564 kernel: vmw_pvscsi: using 64bit dma Oct 31 05:25:08.927575 kernel: vmw_pvscsi: max_id: 16 Oct 31 05:25:08.927583 kernel: vmw_pvscsi: setting ring_pages to 8 Oct 31 05:25:08.927589 kernel: vmw_pvscsi: enabling reqCallThreshold Oct 31 05:25:08.927598 kernel: vmw_pvscsi: driver-based request coalescing enabled Oct 31 05:25:08.927611 kernel: vmw_pvscsi: using MSI-X Oct 31 05:25:08.927717 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Oct 31 05:25:08.927809 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Oct 31 05:25:08.927911 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Oct 31 05:25:08.928742 kernel: sd 0:0:0:0: [sda] 25804800 512-byte logical blocks: (13.2 GB/12.3 GiB) Oct 31 05:25:08.928827 kernel: sd 0:0:0:0: [sda] Write Protect is off Oct 31 05:25:08.928908 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Oct 31 05:25:08.929009 kernel: sd 0:0:0:0: [sda] Cache data unavailable Oct 31 05:25:08.929093 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Oct 31 05:25:08.929104 kernel: libata version 3.00 loaded. Oct 31 05:25:08.929111 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 31 05:25:08.929185 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Oct 31 05:25:08.929256 kernel: ata_piix 0000:00:07.1: version 2.13 Oct 31 05:25:08.929337 kernel: scsi host1: ata_piix Oct 31 05:25:08.929412 kernel: scsi host2: ata_piix Oct 31 05:25:08.929422 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 Oct 31 05:25:08.929429 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 Oct 31 05:25:08.929439 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Oct 31 05:25:08.929517 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Oct 31 05:25:08.929592 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Oct 31 05:25:08.929603 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 31 05:25:08.929610 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 31 05:25:08.929617 kernel: device-mapper: uevent: version 1.0.3 Oct 31 05:25:08.929689 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 31 05:25:08.929699 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 31 05:25:08.929706 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 31 05:25:08.929713 kernel: raid6: avx2x4 gen() 46221 MB/s Oct 31 05:25:08.929720 kernel: raid6: avx2x2 gen() 52171 MB/s Oct 31 05:25:08.929727 kernel: raid6: avx2x1 gen() 42603 MB/s Oct 31 05:25:08.929734 kernel: raid6: using algorithm avx2x2 gen() 52171 MB/s Oct 31 05:25:08.929742 kernel: raid6: .... xor() 31754 MB/s, rmw enabled Oct 31 05:25:08.929755 kernel: raid6: using avx2x2 recovery algorithm Oct 31 05:25:08.929762 kernel: xor: automatically using best checksumming function avx Oct 31 05:25:08.929770 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 31 05:25:08.929777 kernel: BTRFS: device fsid ae3439f8-e0b8-4816-9de2-ae3c9a4f72ac devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (196) Oct 31 05:25:08.929784 kernel: BTRFS info (device dm-0): first mount of filesystem ae3439f8-e0b8-4816-9de2-ae3c9a4f72ac Oct 31 05:25:08.929791 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 31 05:25:08.929799 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 31 05:25:08.929806 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 31 05:25:08.929813 kernel: BTRFS info (device dm-0): enabling free space tree Oct 31 05:25:08.929820 kernel: loop: module loaded Oct 31 05:25:08.929827 kernel: loop0: detected capacity change from 0 to 100136 Oct 31 05:25:08.929834 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 31 05:25:08.929842 systemd[1]: Successfully made /usr/ read-only. Oct 31 05:25:08.929852 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 31 05:25:08.929859 systemd[1]: Detected virtualization vmware. Oct 31 05:25:08.929866 systemd[1]: Detected architecture x86-64. Oct 31 05:25:08.929873 systemd[1]: Running in initrd. Oct 31 05:25:08.929880 systemd[1]: No hostname configured, using default hostname. Oct 31 05:25:08.929887 systemd[1]: Hostname set to <localhost>. Oct 31 05:25:08.929895 systemd[1]: Initializing machine ID from random generator. Oct 31 05:25:08.929902 systemd[1]: Queued start job for default target initrd.target. Oct 31 05:25:08.929909 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 31 05:25:08.929946 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 05:25:08.929954 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 05:25:08.929962 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 31 05:25:08.929969 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 31 05:25:08.929978 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 31 05:25:08.929986 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Oct 31 05:25:08.929993 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 05:25:08.930000 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 31 05:25:08.930007 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 31 05:25:08.930015 systemd[1]: Reached target paths.target - Path Units. Oct 31 05:25:08.930022 systemd[1]: Reached target slices.target - Slice Units. Oct 31 05:25:08.930029 systemd[1]: Reached target swap.target - Swaps. Oct 31 05:25:08.930036 systemd[1]: Reached target timers.target - Timer Units. Oct 31 05:25:08.930043 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 31 05:25:08.930051 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 31 05:25:08.930058 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 31 05:25:08.930066 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 31 05:25:08.930073 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 31 05:25:08.930081 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 31 05:25:08.930088 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 05:25:08.930095 systemd[1]: Reached target sockets.target - Socket Units. Oct 31 05:25:08.930102 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Oct 31 05:25:08.930108 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 31 05:25:08.930116 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 31 05:25:08.930123 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 31 05:25:08.930131 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 31 05:25:08.930138 systemd[1]: Starting systemd-fsck-usr.service... Oct 31 05:25:08.930146 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 31 05:25:08.930153 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 31 05:25:08.930160 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 05:25:08.930169 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 31 05:25:08.930192 systemd-journald[331]: Collecting audit messages is disabled. Oct 31 05:25:08.930210 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 05:25:08.930218 systemd[1]: Finished systemd-fsck-usr.service. Oct 31 05:25:08.930226 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 31 05:25:08.930234 systemd-journald[331]: Journal started Oct 31 05:25:08.930249 systemd-journald[331]: Runtime Journal (/run/log/journal/4b051d30298e42778b12ec2b2b2235eb) is 4.8M, max 38.4M, 33.6M free. Oct 31 05:25:08.931931 systemd[1]: Started systemd-journald.service - Journal Service. Oct 31 05:25:08.934987 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 31 05:25:08.937452 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Oct 31 05:25:08.939931 kernel: Bridge firewalling registered Oct 31 05:25:08.940343 systemd-modules-load[336]: Inserted module 'br_netfilter' Oct 31 05:25:08.942043 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 31 05:25:08.943079 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 31 05:25:08.945653 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 31 05:25:08.948023 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 31 05:25:08.958868 systemd-tmpfiles[350]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 31 05:25:08.963303 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 05:25:08.964047 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 05:25:08.967101 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 31 05:25:08.976973 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 31 05:25:08.988690 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 05:25:08.994078 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 31 05:25:08.996333 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Oct 31 05:25:08.997742 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 05:25:09.000024 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 31 05:25:09.009021 dracut-cmdline[377]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 ip=139.178.70.106::139.178.70.97:28::ens192:off:1.1.1.1:1.0.0.1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=901c893e02c15be8852b9ecc6e1436f31ef98f77ceb926ac27b04b3e43d366de Oct 31 05:25:09.052618 systemd-resolved[365]: Positive Trust Anchors: Oct 31 05:25:09.052627 systemd-resolved[365]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 05:25:09.052629 systemd-resolved[365]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 31 05:25:09.052651 systemd-resolved[365]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 31 05:25:09.066761 systemd-resolved[365]: Defaulting to hostname 'linux'. Oct 31 05:25:09.067925 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 31 05:25:09.068308 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 31 05:25:09.114936 kernel: Loading iSCSI transport class v2.0-870. 
Oct 31 05:25:09.141934 kernel: iscsi: registered transport (tcp) Oct 31 05:25:09.179289 kernel: iscsi: registered transport (qla4xxx) Oct 31 05:25:09.179356 kernel: QLogic iSCSI HBA Driver Oct 31 05:25:09.200038 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 31 05:25:09.224198 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 31 05:25:09.225549 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 31 05:25:09.249372 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 31 05:25:09.250626 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 31 05:25:09.251982 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 31 05:25:09.276397 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 31 05:25:09.279074 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 05:25:09.298497 systemd-udevd[621]: Using default interface naming scheme 'v257'. Oct 31 05:25:09.306577 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 05:25:09.309382 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 31 05:25:09.326118 dracut-pre-trigger[698]: rd.md=0: removing MD RAID activation Oct 31 05:25:09.330770 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 31 05:25:09.332127 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 31 05:25:09.345607 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 31 05:25:09.346996 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 31 05:25:09.364253 systemd-networkd[737]: lo: Link UP Oct 31 05:25:09.364514 systemd-networkd[737]: lo: Gained carrier Oct 31 05:25:09.364970 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 31 05:25:09.365128 systemd[1]: Reached target network.target - Network. Oct 31 05:25:09.437851 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 05:25:09.439004 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 31 05:25:09.533460 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Oct 31 05:25:09.550895 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Oct 31 05:25:09.559383 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Oct 31 05:25:09.560010 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 31 05:25:09.573471 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Oct 31 05:25:09.638847 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Oct 31 05:25:09.638885 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Oct 31 05:25:09.639029 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Oct 31 05:25:09.679936 kernel: cryptd: max_cpu_qlen set to 1000 Oct 31 05:25:09.683057 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Oct 31 05:25:09.683099 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Oct 31 05:25:09.685982 (udev-worker)[761]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. 
Oct 31 05:25:09.686427 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 31 05:25:09.686499 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 05:25:09.686820 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 05:25:09.687645 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 05:25:09.696208 systemd-networkd[737]: eth0: Interface name change detected, renamed to ens192. Oct 31 05:25:09.711869 kernel: AES CTR mode by8 optimization enabled Oct 31 05:25:09.725777 systemd-networkd[737]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Oct 31 05:25:09.727009 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 05:25:09.728781 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 31 05:25:09.728904 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 31 05:25:09.729447 systemd-networkd[737]: ens192: Link UP Oct 31 05:25:09.729453 systemd-networkd[737]: ens192: Gained carrier Oct 31 05:25:09.754802 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 31 05:25:09.755578 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 31 05:25:09.756007 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 05:25:09.756269 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 31 05:25:09.757568 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 31 05:25:09.777650 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 31 05:25:10.681407 disk-uuid[793]: Warning: The kernel is still using the old partition table. Oct 31 05:25:10.681407 disk-uuid[793]: The new table will be used at the next reboot or after you Oct 31 05:25:10.681407 disk-uuid[793]: run partprobe(8) or kpartx(8) Oct 31 05:25:10.681407 disk-uuid[793]: The operation has completed successfully. Oct 31 05:25:10.688620 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 31 05:25:10.688699 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 31 05:25:10.689434 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 31 05:25:10.718946 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (883) Oct 31 05:25:10.721638 kernel: BTRFS info (device sda6): first mount of filesystem 44662e19-0d2a-47a7-b27a-4fe93afcdd56 Oct 31 05:25:10.721689 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 05:25:10.726449 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 31 05:25:10.726505 kernel: BTRFS info (device sda6): enabling free space tree Oct 31 05:25:10.730931 kernel: BTRFS info (device sda6): last unmount of filesystem 44662e19-0d2a-47a7-b27a-4fe93afcdd56 Oct 31 05:25:10.732190 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 31 05:25:10.732987 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
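The static addressing that systemd-networkd applies to ens192 here comes from the dracut ip= argument shown on the kernel command line earlier (ip=139.178.70.106::139.178.70.97:28::ens192:off:1.1.1.1:1.0.0.1). Read with the dracut.cmdline(7) field order, that is: client IP 139.178.70.106, empty peer, gateway 139.178.70.97, prefix length 28, empty hostname, interface ens192, autoconfiguration off (static), and, in the newer syntax, DNS servers 1.1.1.1 and 1.0.0.1. The generated unit 10-dracut-cmdline-99.network is not reproduced in the log; a minimal sketch of an equivalent systemd-networkd unit, assuming only those values, would look like:

    [Match]
    Name=ens192

    [Network]
    # Static configuration derived from the ip= kernel argument (autoconf "off");
    # the real dracut-generated unit may differ in detail.
    Address=139.178.70.106/28
    Gateway=139.178.70.97
    DNS=1.1.1.1
    DNS=1.0.0.1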
Oct 31 05:25:10.781301 systemd-networkd[737]: ens192: Gained IPv6LL Oct 31 05:25:10.938484 ignition[902]: Ignition 2.22.0 Oct 31 05:25:10.938493 ignition[902]: Stage: fetch-offline Oct 31 05:25:10.938517 ignition[902]: no configs at "/usr/lib/ignition/base.d" Oct 31 05:25:10.938523 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 05:25:10.938573 ignition[902]: parsed url from cmdline: "" Oct 31 05:25:10.938574 ignition[902]: no config URL provided Oct 31 05:25:10.938577 ignition[902]: reading system config file "/usr/lib/ignition/user.ign" Oct 31 05:25:10.938582 ignition[902]: no config at "/usr/lib/ignition/user.ign" Oct 31 05:25:10.938946 ignition[902]: config successfully fetched Oct 31 05:25:10.938965 ignition[902]: parsing config with SHA512: 588d371079cab0f76b43cc21035982cdba8ea36229888a036af83b8d02623f0db688f870dbc18aa22c1314e960abf03174c2a1b6aec9e85ee53ebba5170e56e6 Oct 31 05:25:10.941096 unknown[902]: fetched base config from "system" Oct 31 05:25:10.941104 unknown[902]: fetched user config from "vmware" Oct 31 05:25:10.941301 ignition[902]: fetch-offline: fetch-offline passed Oct 31 05:25:10.941332 ignition[902]: Ignition finished successfully Oct 31 05:25:10.942336 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 31 05:25:10.942675 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 31 05:25:10.943443 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 31 05:25:10.965766 ignition[908]: Ignition 2.22.0 Oct 31 05:25:10.965779 ignition[908]: Stage: kargs Oct 31 05:25:10.965884 ignition[908]: no configs at "/usr/lib/ignition/base.d" Oct 31 05:25:10.965891 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 05:25:10.966546 ignition[908]: kargs: kargs passed Oct 31 05:25:10.966578 ignition[908]: Ignition finished successfully Oct 31 05:25:10.968104 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 31 05:25:10.968897 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 31 05:25:10.987447 ignition[914]: Ignition 2.22.0 Oct 31 05:25:10.987459 ignition[914]: Stage: disks Oct 31 05:25:10.987551 ignition[914]: no configs at "/usr/lib/ignition/base.d" Oct 31 05:25:10.987557 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 05:25:10.988033 ignition[914]: disks: disks passed Oct 31 05:25:10.988063 ignition[914]: Ignition finished successfully Oct 31 05:25:10.989623 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 31 05:25:10.990063 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 31 05:25:10.990338 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 31 05:25:10.990619 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 31 05:25:10.990818 systemd[1]: Reached target sysinit.target - System Initialization. Oct 31 05:25:10.990912 systemd[1]: Reached target basic.target - Basic System. Oct 31 05:25:10.991951 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 31 05:25:11.081456 systemd-fsck[922]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Oct 31 05:25:11.082780 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 31 05:25:11.084042 systemd[1]: Mounting sysroot.mount - /sysroot... 
Oct 31 05:25:11.846760 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 31 05:25:11.847019 kernel: EXT4-fs (sda9): mounted filesystem 08ed31a9-c0bd-4d76-b789-4f786927c7ec r/w with ordered data mode. Quota mode: none. Oct 31 05:25:11.847155 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 31 05:25:11.848531 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 31 05:25:11.849957 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 31 05:25:11.850355 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 31 05:25:11.850556 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 31 05:25:11.850744 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 31 05:25:11.858859 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 31 05:25:11.859790 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 31 05:25:11.929934 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (930) Oct 31 05:25:11.941774 kernel: BTRFS info (device sda6): first mount of filesystem 44662e19-0d2a-47a7-b27a-4fe93afcdd56 Oct 31 05:25:11.941812 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 05:25:12.004491 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 31 05:25:12.004547 kernel: BTRFS info (device sda6): enabling free space tree Oct 31 05:25:12.006196 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 31 05:25:12.037154 initrd-setup-root[954]: cut: /sysroot/etc/passwd: No such file or directory Oct 31 05:25:12.040383 initrd-setup-root[961]: cut: /sysroot/etc/group: No such file or directory Oct 31 05:25:12.043317 initrd-setup-root[968]: cut: /sysroot/etc/shadow: No such file or directory Oct 31 05:25:12.045798 initrd-setup-root[975]: cut: /sysroot/etc/gshadow: No such file or directory Oct 31 05:25:12.187622 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 31 05:25:12.188491 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 31 05:25:12.188975 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 31 05:25:12.205136 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 31 05:25:12.206927 kernel: BTRFS info (device sda6): last unmount of filesystem 44662e19-0d2a-47a7-b27a-4fe93afcdd56 Oct 31 05:25:12.227791 ignition[1042]: INFO : Ignition 2.22.0 Oct 31 05:25:12.228862 ignition[1042]: INFO : Stage: mount Oct 31 05:25:12.228862 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 05:25:12.228862 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 05:25:12.228862 ignition[1042]: INFO : mount: mount passed Oct 31 05:25:12.228862 ignition[1042]: INFO : Ignition finished successfully Oct 31 05:25:12.230461 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 31 05:25:12.231277 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 31 05:25:12.242103 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 31 05:25:12.259932 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1050) Oct 31 05:25:12.263533 kernel: BTRFS info (device sda6): first mount of filesystem 44662e19-0d2a-47a7-b27a-4fe93afcdd56 Oct 31 05:25:12.263562 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 05:25:12.265127 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 31 05:25:12.268039 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 31 05:25:12.268065 kernel: BTRFS info (device sda6): enabling free space tree Oct 31 05:25:12.269608 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 31 05:25:12.289136 ignition[1070]: INFO : Ignition 2.22.0 Oct 31 05:25:12.289136 ignition[1070]: INFO : Stage: files Oct 31 05:25:12.289538 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 05:25:12.289538 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 05:25:12.289815 ignition[1070]: DEBUG : files: compiled without relabeling support, skipping Oct 31 05:25:12.294163 ignition[1070]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 31 05:25:12.294163 ignition[1070]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 31 05:25:12.304057 ignition[1070]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 31 05:25:12.304261 ignition[1070]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 31 05:25:12.304406 ignition[1070]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 31 05:25:12.304305 unknown[1070]: wrote ssh authorized keys file for user: core Oct 31 05:25:12.307384 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 31 05:25:12.307644 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 31 05:25:12.359877 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 31 05:25:12.401404 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 31 05:25:12.401404 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 31 05:25:12.401404 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 31 05:25:12.401404 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 31 05:25:12.401404 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 31 05:25:12.401404 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 31 05:25:12.401404 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 31 05:25:12.401404 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 31 05:25:12.401404 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Oct 31 05:25:12.412250 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 31 05:25:12.412469 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 31 05:25:12.412469 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 31 05:25:12.419961 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 31 05:25:12.419961 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 31 05:25:12.420374 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Oct 31 05:25:12.847420 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 31 05:25:13.528682 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 31 05:25:13.529272 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Oct 31 05:25:13.529940 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Oct 31 05:25:13.529940 ignition[1070]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Oct 31 05:25:13.533429 ignition[1070]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 31 05:25:13.533957 ignition[1070]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 31 05:25:13.534297 ignition[1070]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Oct 31 05:25:13.534297 ignition[1070]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Oct 31 05:25:13.534297 ignition[1070]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 31 05:25:13.534297 ignition[1070]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 31 05:25:13.534297 ignition[1070]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Oct 31 05:25:13.534297 ignition[1070]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Oct 31 05:25:13.564621 ignition[1070]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 31 05:25:13.567483 ignition[1070]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 31 05:25:13.567483 ignition[1070]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Oct 31 05:25:13.567483 ignition[1070]: INFO : files: op(12): [started] setting preset 
to enabled for "prepare-helm.service" Oct 31 05:25:13.567483 ignition[1070]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Oct 31 05:25:13.569162 ignition[1070]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 31 05:25:13.569162 ignition[1070]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 31 05:25:13.569162 ignition[1070]: INFO : files: files passed Oct 31 05:25:13.569162 ignition[1070]: INFO : Ignition finished successfully Oct 31 05:25:13.569208 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 31 05:25:13.570351 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 31 05:25:13.570991 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 31 05:25:13.586881 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 31 05:25:13.587340 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 31 05:25:13.591202 initrd-setup-root-after-ignition[1104]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 31 05:25:13.591202 initrd-setup-root-after-ignition[1104]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 31 05:25:13.592396 initrd-setup-root-after-ignition[1108]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 31 05:25:13.593463 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 31 05:25:13.593875 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 31 05:25:13.594505 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 31 05:25:13.631000 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 31 05:25:13.631073 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 31 05:25:13.631458 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 31 05:25:13.631591 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 31 05:25:13.631895 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 31 05:25:13.632416 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 31 05:25:13.645556 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 31 05:25:13.646389 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 31 05:25:13.660001 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 31 05:25:13.660224 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 31 05:25:13.660476 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 05:25:13.660748 systemd[1]: Stopped target timers.target - Timer Units. Oct 31 05:25:13.660979 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 31 05:25:13.661097 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 31 05:25:13.661438 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 31 05:25:13.661655 systemd[1]: Stopped target basic.target - Basic System. Oct 31 05:25:13.661862 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
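The file, link, and unit operations recorded by the files stage above are driven by the Ignition config fetched earlier from the VMware guestinfo properties; the config itself never appears in the log. Purely as an illustration of the shape of such a config (the spec version and file contents below are assumptions, not values taken from this system), a fragment declaring one file and the kubernetes sysext link could look like:

    {
      "ignition": { "version": "3.4.0" },
      "storage": {
        "files": [
          {
            "path": "/etc/flatcar/update.conf",
            "mode": 420,
            "contents": { "source": "data:,GROUP%3Dstable%0A" }
          }
        ],
        "links": [
          {
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw",
            "hard": false
          }
        ]
      }
    }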
Oct 31 05:25:13.662102 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 31 05:25:13.662330 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 31 05:25:13.662581 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 31 05:25:13.662814 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 31 05:25:13.663065 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 31 05:25:13.663308 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 31 05:25:13.663529 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 31 05:25:13.663747 systemd[1]: Stopped target swap.target - Swaps. Oct 31 05:25:13.663924 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 31 05:25:13.664033 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 31 05:25:13.664397 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 31 05:25:13.664620 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 05:25:13.664813 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 31 05:25:13.664880 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 05:25:13.665087 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 31 05:25:13.665185 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 31 05:25:13.665567 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 31 05:25:13.665664 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 31 05:25:13.666027 systemd[1]: Stopped target paths.target - Path Units. Oct 31 05:25:13.666199 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 31 05:25:13.669949 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 05:25:13.670181 systemd[1]: Stopped target slices.target - Slice Units. Oct 31 05:25:13.670421 systemd[1]: Stopped target sockets.target - Socket Units. Oct 31 05:25:13.670620 systemd[1]: iscsid.socket: Deactivated successfully. Oct 31 05:25:13.670698 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 31 05:25:13.670962 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 31 05:25:13.671036 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 31 05:25:13.671335 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 31 05:25:13.671443 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 31 05:25:13.671703 systemd[1]: ignition-files.service: Deactivated successfully. Oct 31 05:25:13.671797 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 31 05:25:13.672723 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 31 05:25:13.672866 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 31 05:25:13.672987 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 05:25:13.675041 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 31 05:25:13.675182 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 31 05:25:13.675286 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Oct 31 05:25:13.675541 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 31 05:25:13.675638 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 05:25:13.675896 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 31 05:25:13.676004 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 31 05:25:13.690534 ignition[1128]: INFO : Ignition 2.22.0 Oct 31 05:25:13.690534 ignition[1128]: INFO : Stage: umount Oct 31 05:25:13.690986 ignition[1128]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 05:25:13.690986 ignition[1128]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 05:25:13.691847 ignition[1128]: INFO : umount: umount passed Oct 31 05:25:13.691847 ignition[1128]: INFO : Ignition finished successfully Oct 31 05:25:13.696285 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 31 05:25:13.696345 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 31 05:25:13.696750 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 31 05:25:13.696810 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 31 05:25:13.698303 systemd[1]: Stopped target network.target - Network. Oct 31 05:25:13.698618 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 31 05:25:13.698745 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 31 05:25:13.699016 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 31 05:25:13.699138 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 31 05:25:13.699374 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 31 05:25:13.699498 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 31 05:25:13.699711 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 31 05:25:13.699733 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 31 05:25:13.700226 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 31 05:25:13.700610 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 31 05:25:13.707896 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 31 05:25:13.708113 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 31 05:25:13.709375 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 31 05:25:13.709629 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 31 05:25:13.709751 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 31 05:25:13.711046 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 31 05:25:13.711263 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 31 05:25:13.711407 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 31 05:25:13.711743 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Oct 31 05:25:13.711882 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Oct 31 05:25:13.712525 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 05:25:13.717388 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 31 05:25:13.717464 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 31 05:25:13.718564 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Oct 31 05:25:13.718595 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 31 05:25:13.718793 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 31 05:25:13.718817 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 31 05:25:13.720566 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 31 05:25:13.720644 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 05:25:13.720907 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 31 05:25:13.720943 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 31 05:25:13.721145 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 31 05:25:13.721161 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 05:25:13.721340 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 31 05:25:13.721364 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 31 05:25:13.721627 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 31 05:25:13.721652 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 31 05:25:13.721976 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 31 05:25:13.721998 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 05:25:13.723235 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 31 05:25:13.724435 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 31 05:25:13.724464 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 31 05:25:13.724661 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 31 05:25:13.724683 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 05:25:13.725165 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 31 05:25:13.725191 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 05:25:13.732052 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 31 05:25:13.743212 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 31 05:25:13.743266 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 31 05:25:13.763411 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 31 05:25:13.763501 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 31 05:25:13.868997 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 31 05:25:13.869089 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 31 05:25:13.869631 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 31 05:25:13.869818 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 31 05:25:13.869864 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 31 05:25:13.870588 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 31 05:25:13.889466 systemd[1]: Switching root. Oct 31 05:25:13.915863 systemd-journald[331]: Journal stopped Oct 31 05:25:15.850885 systemd-journald[331]: Received SIGTERM from PID 1 (systemd). 
Oct 31 05:25:15.850940 kernel: SELinux: policy capability network_peer_controls=1 Oct 31 05:25:15.850953 kernel: SELinux: policy capability open_perms=1 Oct 31 05:25:15.850960 kernel: SELinux: policy capability extended_socket_class=1 Oct 31 05:25:15.850966 kernel: SELinux: policy capability always_check_network=0 Oct 31 05:25:15.850972 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 31 05:25:15.850981 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 31 05:25:15.850988 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 31 05:25:15.850995 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 31 05:25:15.851001 kernel: SELinux: policy capability userspace_initial_context=0 Oct 31 05:25:15.851008 kernel: audit: type=1403 audit(1761888315.158:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 31 05:25:15.851015 systemd[1]: Successfully loaded SELinux policy in 53.339ms. Oct 31 05:25:15.851024 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.880ms. Oct 31 05:25:15.851032 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 31 05:25:15.851040 systemd[1]: Detected virtualization vmware. Oct 31 05:25:15.851048 systemd[1]: Detected architecture x86-64. Oct 31 05:25:15.851056 systemd[1]: Detected first boot. Oct 31 05:25:15.851064 systemd[1]: Initializing machine ID from random generator. Oct 31 05:25:15.851071 zram_generator::config[1171]: No configuration found. Oct 31 05:25:15.851185 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Oct 31 05:25:15.851197 kernel: Guest personality initialized and is active Oct 31 05:25:15.851206 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 31 05:25:15.851213 kernel: Initialized host personality Oct 31 05:25:15.851219 kernel: NET: Registered PF_VSOCK protocol family Oct 31 05:25:15.851227 systemd[1]: Populated /etc with preset unit settings. Oct 31 05:25:15.851236 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 31 05:25:15.851246 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Oct 31 05:25:15.851253 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 31 05:25:15.851260 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 31 05:25:15.851268 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 31 05:25:15.851275 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 31 05:25:15.851283 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 31 05:25:15.851292 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 31 05:25:15.851300 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 31 05:25:15.851307 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 31 05:25:15.851315 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
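The "Ignoring unknown escape sequences" warning for /etc/systemd/system/coreos-metadata.service:11 above is systemd's unit-file parser consuming the backslashes in the grep -Po pattern (\K and \d are not escape sequences it recognizes). The usual remedy is to double the backslashes so literal ones survive parsing and reach grep. A hypothetical drop-in, shown only to illustrate the escaping and not the actual Flatcar unit, might read:

    # /etc/systemd/system/coreos-metadata.service.d/10-escapes.conf (hypothetical)
    [Service]
    ExecStart=
    # Doubled backslashes: systemd turns \\K and \\d back into \K and \d for the shell.
    ExecStart=/usr/bin/bash -c 'ip addr show ens192 | grep -Po "inet \\K[\\d.]+"'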
Oct 31 05:25:15.851322 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 31 05:25:15.851330 systemd[1]: Created slice user.slice - User and Session Slice. Oct 31 05:25:15.851338 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 05:25:15.851346 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 05:25:15.851356 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 31 05:25:15.851364 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 31 05:25:15.851372 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 31 05:25:15.851380 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 31 05:25:15.851388 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 31 05:25:15.851396 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 05:25:15.851404 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 31 05:25:15.851412 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 31 05:25:15.851420 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 31 05:25:15.851427 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 31 05:25:15.851435 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 31 05:25:15.851444 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 05:25:15.851451 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 31 05:25:15.851459 systemd[1]: Reached target slices.target - Slice Units. Oct 31 05:25:15.851466 systemd[1]: Reached target swap.target - Swaps. Oct 31 05:25:15.851474 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 31 05:25:15.851481 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 31 05:25:15.851490 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 31 05:25:15.851498 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 31 05:25:15.851506 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 31 05:25:15.851514 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 05:25:15.851522 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 31 05:25:15.851531 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 31 05:25:15.851539 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 31 05:25:15.851547 systemd[1]: Mounting media.mount - External Media Directory... Oct 31 05:25:15.851554 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 05:25:15.851562 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 31 05:25:15.851570 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 31 05:25:15.851577 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Oct 31 05:25:15.851587 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 31 05:25:15.851595 systemd[1]: Reached target machines.target - Containers. Oct 31 05:25:15.851603 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 31 05:25:15.851611 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Oct 31 05:25:15.851619 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 31 05:25:15.851626 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 31 05:25:15.851635 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 31 05:25:15.851643 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 31 05:25:15.851651 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 31 05:25:15.851658 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 31 05:25:15.851666 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 31 05:25:15.851674 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 31 05:25:15.851681 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 31 05:25:15.851690 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 31 05:25:15.851699 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 31 05:25:15.851706 systemd[1]: Stopped systemd-fsck-usr.service. Oct 31 05:25:15.851714 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 31 05:25:15.851722 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 31 05:25:15.851730 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 31 05:25:15.851738 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 31 05:25:15.851747 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 31 05:25:15.851755 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 31 05:25:15.851763 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 31 05:25:15.851771 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 05:25:15.851779 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 31 05:25:15.851786 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 31 05:25:15.851795 systemd[1]: Mounted media.mount - External Media Directory. Oct 31 05:25:15.851803 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 31 05:25:15.851811 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 31 05:25:15.851818 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 31 05:25:15.851826 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 05:25:15.851834 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Oct 31 05:25:15.851841 kernel: fuse: init (API version 7.41) Oct 31 05:25:15.851850 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 31 05:25:15.851858 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 05:25:15.851865 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 31 05:25:15.851873 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 05:25:15.851880 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 31 05:25:15.851888 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 05:25:15.851896 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 31 05:25:15.851905 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 31 05:25:15.853926 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 31 05:25:15.853944 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 31 05:25:15.853953 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 31 05:25:15.853962 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 31 05:25:15.853970 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 31 05:25:15.853977 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 31 05:25:15.853988 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 31 05:25:15.853996 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 31 05:25:15.854007 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 31 05:25:15.854016 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 31 05:25:15.854024 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 31 05:25:15.854050 systemd-journald[1268]: Collecting audit messages is disabled. Oct 31 05:25:15.854072 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 05:25:15.854081 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 31 05:25:15.854091 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 05:25:15.854100 systemd-journald[1268]: Journal started Oct 31 05:25:15.854116 systemd-journald[1268]: Runtime Journal (/run/log/journal/e220c5baa8614be2af3c5eaa3f895894) is 4.8M, max 38.4M, 33.6M free. Oct 31 05:25:15.636945 systemd[1]: Queued start job for default target multi-user.target. Oct 31 05:25:15.659789 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Oct 31 05:25:15.660179 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 31 05:25:15.854641 jq[1241]: true Oct 31 05:25:15.855206 jq[1284]: true Oct 31 05:25:15.857973 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 31 05:25:15.858002 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 31 05:25:15.862449 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 31 05:25:15.866956 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Oct 31 05:25:15.872968 systemd[1]: Started systemd-journald.service - Journal Service. Oct 31 05:25:15.868824 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 31 05:25:15.869141 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 31 05:25:15.873903 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 31 05:25:15.874089 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 31 05:25:15.889925 kernel: ACPI: bus type drm_connector registered Oct 31 05:25:15.894481 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 31 05:25:15.894972 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 31 05:25:15.895092 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 31 05:25:15.896416 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 31 05:25:15.903818 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 31 05:25:15.906976 kernel: loop1: detected capacity change from 0 to 128912 Oct 31 05:25:15.906998 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 31 05:25:15.913592 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 31 05:25:15.916033 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 31 05:25:15.930071 systemd-journald[1268]: Time spent on flushing to /var/log/journal/e220c5baa8614be2af3c5eaa3f895894 is 57.586ms for 1750 entries. Oct 31 05:25:15.930071 systemd-journald[1268]: System Journal (/var/log/journal/e220c5baa8614be2af3c5eaa3f895894) is 8M, max 588.1M, 580.1M free. Oct 31 05:25:15.998045 systemd-journald[1268]: Received client request to flush runtime journal. Oct 31 05:25:15.998081 kernel: loop2: detected capacity change from 0 to 111544 Oct 31 05:25:15.949932 ignition[1296]: Ignition 2.22.0 Oct 31 05:25:15.971749 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Oct 31 05:25:15.950132 ignition[1296]: deleting config from guestinfo properties Oct 31 05:25:15.970244 ignition[1296]: Successfully deleted config Oct 31 05:25:15.999110 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 31 05:25:16.004281 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 31 05:25:16.011712 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 31 05:25:16.016012 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 31 05:25:16.019207 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 31 05:25:16.024928 kernel: loop3: detected capacity change from 0 to 229808 Oct 31 05:25:16.029510 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 31 05:25:16.050244 systemd-tmpfiles[1341]: ACLs are not supported, ignoring. Oct 31 05:25:16.050424 systemd-tmpfiles[1341]: ACLs are not supported, ignoring. Oct 31 05:25:16.055921 kernel: loop4: detected capacity change from 0 to 2960 Oct 31 05:25:16.056235 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 05:25:16.062955 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 05:25:16.071911 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Oct 31 05:25:16.081935 kernel: loop5: detected capacity change from 0 to 128912 Oct 31 05:25:16.091931 kernel: loop6: detected capacity change from 0 to 111544 Oct 31 05:25:16.105942 kernel: loop7: detected capacity change from 0 to 229808 Oct 31 05:25:16.121627 kernel: loop1: detected capacity change from 0 to 2960 Oct 31 05:25:16.129161 (sd-merge)[1352]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-vmware.raw'. Oct 31 05:25:16.131541 (sd-merge)[1352]: Merged extensions into '/usr'. Oct 31 05:25:16.136041 systemd[1]: Reload requested from client PID 1295 ('systemd-sysext') (unit systemd-sysext.service)... Oct 31 05:25:16.136053 systemd[1]: Reloading... Oct 31 05:25:16.138875 systemd-resolved[1340]: Positive Trust Anchors: Oct 31 05:25:16.139090 systemd-resolved[1340]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 05:25:16.139124 systemd-resolved[1340]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 31 05:25:16.139171 systemd-resolved[1340]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 31 05:25:16.142038 systemd-resolved[1340]: Defaulting to hostname 'linux'. Oct 31 05:25:16.183013 zram_generator::config[1379]: No configuration found. Oct 31 05:25:16.284356 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 31 05:25:16.331350 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 31 05:25:16.331598 systemd[1]: Reloading finished in 195 ms. Oct 31 05:25:16.348886 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 31 05:25:16.350297 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 31 05:25:16.350809 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 31 05:25:16.358022 systemd[1]: Starting ensure-sysext.service... Oct 31 05:25:16.359943 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 31 05:25:16.376988 systemd[1]: Reload requested from client PID 1437 ('systemctl') (unit ensure-sysext.service)... Oct 31 05:25:16.377000 systemd[1]: Reloading... Oct 31 05:25:16.391169 systemd-tmpfiles[1438]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 31 05:25:16.391207 systemd-tmpfiles[1438]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 31 05:25:16.391406 systemd-tmpfiles[1438]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 31 05:25:16.391584 systemd-tmpfiles[1438]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 31 05:25:16.393139 systemd-tmpfiles[1438]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Oct 31 05:25:16.393368 systemd-tmpfiles[1438]: ACLs are not supported, ignoring. Oct 31 05:25:16.393995 systemd-tmpfiles[1438]: ACLs are not supported, ignoring. Oct 31 05:25:16.418933 zram_generator::config[1468]: No configuration found. Oct 31 05:25:16.500408 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 31 05:25:16.547311 systemd[1]: Reloading finished in 170 ms. Oct 31 05:25:16.583090 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 31 05:25:16.583737 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 31 05:25:16.592502 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 31 05:25:16.592635 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 05:25:16.592704 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 31 05:25:16.593275 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 05:25:16.593379 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 31 05:25:16.593729 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 05:25:16.593836 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 31 05:25:16.596537 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 31 05:25:16.599078 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 31 05:25:16.599225 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 05:25:16.599290 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 31 05:25:16.599649 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 05:25:16.599775 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 31 05:25:16.604198 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 31 05:25:16.604906 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 31 05:25:16.606013 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 05:25:16.606089 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 31 05:25:16.606625 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 05:25:16.607053 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 31 05:25:16.607472 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 05:25:16.610148 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 31 05:25:16.611021 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 05:25:16.611118 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Oct 31 05:25:16.611824 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 05:25:16.612291 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 31 05:25:16.612644 systemd[1]: Finished ensure-sysext.service. Oct 31 05:25:16.612871 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 31 05:25:16.612993 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 31 05:25:16.615756 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 31 05:25:16.661487 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 05:25:16.661507 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 05:25:16.915667 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 31 05:25:16.916051 systemd[1]: Reached target time-set.target - System Time Set. Oct 31 05:25:16.949553 systemd-tmpfiles[1438]: Detected autofs mount point /boot during canonicalization of boot. Oct 31 05:25:16.949562 systemd-tmpfiles[1438]: Skipping /boot Oct 31 05:25:16.955097 systemd-tmpfiles[1438]: Detected autofs mount point /boot during canonicalization of boot. Oct 31 05:25:16.955107 systemd-tmpfiles[1438]: Skipping /boot Oct 31 05:25:17.002232 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 05:25:17.004400 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 31 05:25:17.014732 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 31 05:25:17.016014 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 31 05:25:17.018027 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 31 05:25:17.018750 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 31 05:25:17.041369 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 31 05:25:17.074531 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 31 05:25:17.075908 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 05:25:17.154546 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 31 05:25:17.171305 systemd-udevd[1557]: Using default interface naming scheme 'v257'. Oct 31 05:25:17.235161 augenrules[1572]: No rules Oct 31 05:25:17.235771 systemd[1]: audit-rules.service: Deactivated successfully. Oct 31 05:25:17.235936 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 31 05:25:17.480833 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 05:25:17.484041 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 31 05:25:17.539658 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 31 05:25:17.565592 systemd-networkd[1579]: lo: Link UP Oct 31 05:25:17.565597 systemd-networkd[1579]: lo: Gained carrier Oct 31 05:25:17.566493 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 31 05:25:17.566813 systemd[1]: Reached target network.target - Network. 
Oct 31 05:25:17.568241 systemd-networkd[1579]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Oct 31 05:25:17.568341 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 31 05:25:17.570264 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 31 05:25:17.573945 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 31 05:25:17.574122 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 31 05:25:17.578740 systemd-networkd[1579]: ens192: Link UP Oct 31 05:25:17.578876 systemd-networkd[1579]: ens192: Gained carrier Oct 31 05:25:17.583170 systemd-timesyncd[1539]: Network configuration changed, trying to establish connection. Oct 31 05:25:17.591837 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 31 05:25:17.592331 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 31 05:25:17.604967 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 31 05:25:17.613755 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 31 05:25:17.620936 kernel: mousedev: PS/2 mouse device common for all mice Oct 31 05:25:17.629617 kernel: ACPI: button: Power Button [PWRF] Oct 31 05:25:17.673758 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Oct 31 05:25:17.676163 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 31 05:25:17.697441 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 31 05:25:17.718661 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Oct 31 05:25:17.796794 (udev-worker)[1583]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Oct 31 05:25:17.799998 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 05:25:17.882996 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 05:25:17.986946 ldconfig[1546]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 31 05:25:17.989014 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 31 05:25:17.990192 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 31 05:25:17.999550 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 31 05:25:17.999865 systemd[1]: Reached target sysinit.target - System Initialization. Oct 31 05:25:18.000113 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 31 05:25:18.000302 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 31 05:25:18.000464 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 31 05:25:18.000691 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 31 05:25:18.000872 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 31 05:25:18.001030 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
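Editor's note: systemd-networkd brought ens192 up from /etc/systemd/network/00-vmware.network once the vmxnet3 link reported carrier. A minimal sketch of a DHCP .network file of that kind; the contents Flatcar actually ships may differ, and only the interface name is taken from the log:

    # Hypothetical minimal .network unit for the vmxnet3 NIC seen above.
    cat <<'EOF' >/etc/systemd/network/00-vmware.network
    [Match]
    Name=ens192

    [Network]
    DHCP=yes
    EOF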
Oct 31 05:25:18.001202 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 31 05:25:18.001222 systemd[1]: Reached target paths.target - Path Units. Oct 31 05:25:18.001318 systemd[1]: Reached target timers.target - Timer Units. Oct 31 05:25:18.002788 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 31 05:25:18.003855 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 31 05:25:18.005455 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 31 05:25:18.005698 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 31 05:25:18.005865 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 31 05:25:18.007491 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 31 05:25:18.007796 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 31 05:25:18.008314 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 31 05:25:18.008846 systemd[1]: Reached target sockets.target - Socket Units. Oct 31 05:25:18.008999 systemd[1]: Reached target basic.target - Basic System. Oct 31 05:25:18.009161 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 31 05:25:18.009215 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 31 05:25:18.009983 systemd[1]: Starting containerd.service - containerd container runtime... Oct 31 05:25:18.011995 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 31 05:25:18.013134 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 31 05:25:18.015474 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 31 05:25:18.018021 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 31 05:25:18.018137 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 31 05:25:18.023532 jq[1645]: false Oct 31 05:25:18.022771 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 31 05:25:18.023813 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 31 05:25:18.025963 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 31 05:25:18.027715 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 31 05:25:18.035005 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 31 05:25:18.039436 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 31 05:25:18.039555 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 31 05:25:18.040144 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 31 05:25:18.041271 systemd[1]: Starting update-engine.service - Update Engine... Oct 31 05:25:18.043133 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 31 05:25:18.048652 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... 
Oct 31 05:25:18.050702 google_oslogin_nss_cache[1647]: oslogin_cache_refresh[1647]: Refreshing passwd entry cache Oct 31 05:25:18.050700 oslogin_cache_refresh[1647]: Refreshing passwd entry cache Oct 31 05:25:18.051568 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 31 05:25:18.051849 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 31 05:25:18.052000 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 31 05:25:18.053170 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 31 05:25:18.053431 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 31 05:25:18.060889 jq[1658]: true Oct 31 05:25:18.062264 extend-filesystems[1646]: Found /dev/sda6 Oct 31 05:25:18.066569 oslogin_cache_refresh[1647]: Failure getting users, quitting Oct 31 05:25:18.068190 google_oslogin_nss_cache[1647]: oslogin_cache_refresh[1647]: Failure getting users, quitting Oct 31 05:25:18.068190 google_oslogin_nss_cache[1647]: oslogin_cache_refresh[1647]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 31 05:25:18.068190 google_oslogin_nss_cache[1647]: oslogin_cache_refresh[1647]: Refreshing group entry cache Oct 31 05:25:18.066580 oslogin_cache_refresh[1647]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 31 05:25:18.066606 oslogin_cache_refresh[1647]: Refreshing group entry cache Oct 31 05:25:18.070606 extend-filesystems[1646]: Found /dev/sda9 Oct 31 05:25:18.070971 google_oslogin_nss_cache[1647]: oslogin_cache_refresh[1647]: Failure getting groups, quitting Oct 31 05:25:18.070971 google_oslogin_nss_cache[1647]: oslogin_cache_refresh[1647]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 31 05:25:18.070640 oslogin_cache_refresh[1647]: Failure getting groups, quitting Oct 31 05:25:18.070647 oslogin_cache_refresh[1647]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 31 05:25:18.071488 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 31 05:25:18.071629 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 31 05:25:18.073063 extend-filesystems[1646]: Checking size of /dev/sda9 Oct 31 05:25:18.074496 update_engine[1656]: I20251031 05:25:18.074139 1656 main.cc:92] Flatcar Update Engine starting Oct 31 05:25:18.083740 extend-filesystems[1646]: Resized partition /dev/sda9 Oct 31 05:25:18.085779 systemd[1]: motdgen.service: Deactivated successfully. Oct 31 05:25:18.085941 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 31 05:25:18.092038 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Oct 31 05:25:18.096063 extend-filesystems[1694]: resize2fs 1.47.3 (8-Jul-2025) Oct 31 05:25:18.096305 jq[1671]: true Oct 31 05:25:18.107928 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 1635323 blocks Oct 31 05:25:18.123223 kernel: EXT4-fs (sda9): resized filesystem to 1635323 Oct 31 05:25:18.101082 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... 
Oct 31 05:25:18.123303 tar[1663]: linux-amd64/LICENSE Oct 31 05:25:18.124151 tar[1663]: linux-amd64/helm Oct 31 05:25:18.129080 extend-filesystems[1694]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 31 05:25:18.129080 extend-filesystems[1694]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 31 05:25:18.129080 extend-filesystems[1694]: The filesystem on /dev/sda9 is now 1635323 (4k) blocks long. Oct 31 05:25:18.130059 extend-filesystems[1646]: Resized filesystem in /dev/sda9 Oct 31 05:25:18.129372 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 31 05:25:18.129677 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 31 05:25:18.143615 dbus-daemon[1643]: [system] SELinux support is enabled Oct 31 05:25:18.143755 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 31 05:25:18.146802 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 31 05:25:18.146821 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 31 05:25:18.147444 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 31 05:25:18.147457 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 31 05:25:18.152720 unknown[1695]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Oct 31 05:25:18.158812 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Oct 31 05:25:18.160642 systemd[1]: Started update-engine.service - Update Engine. Oct 31 05:25:18.161465 update_engine[1656]: I20251031 05:25:18.161095 1656 update_check_scheduler.cc:74] Next update check in 2m52s Oct 31 05:25:18.164997 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 31 05:25:18.168377 unknown[1695]: Core dump limit set to -1 Oct 31 05:25:18.197036 bash[1719]: Updated "/home/core/.ssh/authorized_keys" Oct 31 05:25:18.198476 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 31 05:25:18.199042 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 31 05:25:18.199525 systemd-logind[1655]: Watching system buttons on /dev/input/event2 (Power Button) Oct 31 05:25:18.208705 systemd-logind[1655]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 31 05:25:18.209172 systemd-logind[1655]: New seat seat0. Oct 31 05:25:18.209666 systemd[1]: Started systemd-logind.service - User Login Management. 
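Editor's note: extend-filesystems.service grew the ext4 filesystem on /dev/sda9 online, from 1617920 to 1635323 4k blocks, while it stayed mounted on /. A rough manual equivalent, assuming the underlying partition has already been enlarged; resize2fs grows a mounted ext4 filesystem to fill its device when no size argument is given:

    # Grow the mounted ext4 filesystem on /dev/sda9 to the size of the
    # underlying partition, as extend-filesystems.service did above.
    resize2fs /dev/sda9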
Oct 31 05:25:18.359321 locksmithd[1718]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 31 05:25:18.367179 sshd_keygen[1689]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 31 05:25:18.403235 containerd[1681]: time="2025-10-31T05:25:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 31 05:25:18.404850 containerd[1681]: time="2025-10-31T05:25:18.404232417Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 31 05:25:18.406397 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 31 05:25:18.409056 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 31 05:25:18.417521 containerd[1681]: time="2025-10-31T05:25:18.417487343Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.603µs" Oct 31 05:25:18.417521 containerd[1681]: time="2025-10-31T05:25:18.417511231Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 31 05:25:18.417521 containerd[1681]: time="2025-10-31T05:25:18.417524698Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 31 05:25:18.417905 containerd[1681]: time="2025-10-31T05:25:18.417638019Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 31 05:25:18.417905 containerd[1681]: time="2025-10-31T05:25:18.417648717Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 31 05:25:18.417905 containerd[1681]: time="2025-10-31T05:25:18.417663238Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 31 05:25:18.417905 containerd[1681]: time="2025-10-31T05:25:18.417695999Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 31 05:25:18.417905 containerd[1681]: time="2025-10-31T05:25:18.417703065Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 31 05:25:18.417905 containerd[1681]: time="2025-10-31T05:25:18.417839032Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 31 05:25:18.417905 containerd[1681]: time="2025-10-31T05:25:18.417848095Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 31 05:25:18.417905 containerd[1681]: time="2025-10-31T05:25:18.417854245Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 31 05:25:18.417905 containerd[1681]: time="2025-10-31T05:25:18.417858998Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 31 05:25:18.417905 containerd[1681]: time="2025-10-31T05:25:18.417898367Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 31 05:25:18.419655 containerd[1681]: 
time="2025-10-31T05:25:18.419285286Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 31 05:25:18.419655 containerd[1681]: time="2025-10-31T05:25:18.419313718Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 31 05:25:18.419655 containerd[1681]: time="2025-10-31T05:25:18.419328098Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 31 05:25:18.422849 containerd[1681]: time="2025-10-31T05:25:18.422646788Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 31 05:25:18.422849 containerd[1681]: time="2025-10-31T05:25:18.422852779Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 31 05:25:18.422948 containerd[1681]: time="2025-10-31T05:25:18.422939765Z" level=info msg="metadata content store policy set" policy=shared Oct 31 05:25:18.425028 containerd[1681]: time="2025-10-31T05:25:18.425002519Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 31 05:25:18.425085 containerd[1681]: time="2025-10-31T05:25:18.425039653Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 31 05:25:18.425085 containerd[1681]: time="2025-10-31T05:25:18.425048633Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 31 05:25:18.425085 containerd[1681]: time="2025-10-31T05:25:18.425055586Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 31 05:25:18.425085 containerd[1681]: time="2025-10-31T05:25:18.425062522Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 31 05:25:18.425085 containerd[1681]: time="2025-10-31T05:25:18.425068592Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 31 05:25:18.425085 containerd[1681]: time="2025-10-31T05:25:18.425075876Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 31 05:25:18.425085 containerd[1681]: time="2025-10-31T05:25:18.425082698Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 31 05:25:18.425183 containerd[1681]: time="2025-10-31T05:25:18.425094335Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 31 05:25:18.425183 containerd[1681]: time="2025-10-31T05:25:18.425101779Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 31 05:25:18.425183 containerd[1681]: time="2025-10-31T05:25:18.425111214Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 31 05:25:18.425183 containerd[1681]: time="2025-10-31T05:25:18.425122457Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 31 05:25:18.425236 containerd[1681]: time="2025-10-31T05:25:18.425193801Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 31 05:25:18.425236 containerd[1681]: time="2025-10-31T05:25:18.425206715Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 31 05:25:18.425236 containerd[1681]: time="2025-10-31T05:25:18.425214780Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 31 05:25:18.425236 containerd[1681]: time="2025-10-31T05:25:18.425221403Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 31 05:25:18.425236 containerd[1681]: time="2025-10-31T05:25:18.425228926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 31 05:25:18.425236 containerd[1681]: time="2025-10-31T05:25:18.425234575Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 31 05:25:18.425308 containerd[1681]: time="2025-10-31T05:25:18.425240642Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 31 05:25:18.425308 containerd[1681]: time="2025-10-31T05:25:18.425246185Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 31 05:25:18.425308 containerd[1681]: time="2025-10-31T05:25:18.425251838Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 31 05:25:18.425308 containerd[1681]: time="2025-10-31T05:25:18.425257862Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 31 05:25:18.425308 containerd[1681]: time="2025-10-31T05:25:18.425264098Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 31 05:25:18.425308 containerd[1681]: time="2025-10-31T05:25:18.425299989Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 31 05:25:18.425308 containerd[1681]: time="2025-10-31T05:25:18.425307917Z" level=info msg="Start snapshots syncer" Oct 31 05:25:18.425398 containerd[1681]: time="2025-10-31T05:25:18.425321352Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 31 05:25:18.428121 systemd[1]: issuegen.service: Deactivated successfully. Oct 31 05:25:18.428400 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Oct 31 05:25:18.431146 containerd[1681]: time="2025-10-31T05:25:18.428490815Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 31 05:25:18.431146 containerd[1681]: time="2025-10-31T05:25:18.429158266Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 31 05:25:18.431070 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Oct 31 05:25:18.431315 containerd[1681]: time="2025-10-31T05:25:18.431094760Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 31 05:25:18.432198 containerd[1681]: time="2025-10-31T05:25:18.431959377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 31 05:25:18.432198 containerd[1681]: time="2025-10-31T05:25:18.431981782Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 31 05:25:18.432198 containerd[1681]: time="2025-10-31T05:25:18.431998205Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 31 05:25:18.432198 containerd[1681]: time="2025-10-31T05:25:18.432006627Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 31 05:25:18.432198 containerd[1681]: time="2025-10-31T05:25:18.432131755Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 31 05:25:18.432198 containerd[1681]: time="2025-10-31T05:25:18.432142137Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 31 05:25:18.432198 containerd[1681]: time="2025-10-31T05:25:18.432148742Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 31 05:25:18.432198 containerd[1681]: time="2025-10-31T05:25:18.432166910Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 31 05:25:18.432198 containerd[1681]: time="2025-10-31T05:25:18.432174256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 31 05:25:18.432198 containerd[1681]: time="2025-10-31T05:25:18.432181747Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 31 05:25:18.432468 containerd[1681]: time="2025-10-31T05:25:18.432387073Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 31 05:25:18.433294 containerd[1681]: time="2025-10-31T05:25:18.432580397Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 31 05:25:18.433294 containerd[1681]: time="2025-10-31T05:25:18.432590502Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 31 05:25:18.433294 containerd[1681]: time="2025-10-31T05:25:18.432596785Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 31 05:25:18.433294 containerd[1681]: time="2025-10-31T05:25:18.432605992Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 31 05:25:18.433294 containerd[1681]: time="2025-10-31T05:25:18.432616005Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 31 05:25:18.433294 containerd[1681]: time="2025-10-31T05:25:18.432624483Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 31 05:25:18.433294 containerd[1681]: time="2025-10-31T05:25:18.432635160Z" level=info msg="runtime interface created" Oct 31 05:25:18.433294 containerd[1681]: time="2025-10-31T05:25:18.432638283Z" level=info 
msg="created NRI interface" Oct 31 05:25:18.433294 containerd[1681]: time="2025-10-31T05:25:18.432649334Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 31 05:25:18.433294 containerd[1681]: time="2025-10-31T05:25:18.432662920Z" level=info msg="Connect containerd service" Oct 31 05:25:18.433294 containerd[1681]: time="2025-10-31T05:25:18.432684383Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 31 05:25:18.435009 containerd[1681]: time="2025-10-31T05:25:18.434224230Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 05:25:18.454292 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 31 05:25:18.457451 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 31 05:25:18.459531 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 31 05:25:18.460048 systemd[1]: Reached target getty.target - Login Prompts. Oct 31 05:25:18.529389 containerd[1681]: time="2025-10-31T05:25:18.529364365Z" level=info msg="Start subscribing containerd event" Oct 31 05:25:18.529499 containerd[1681]: time="2025-10-31T05:25:18.529482431Z" level=info msg="Start recovering state" Oct 31 05:25:18.529604 containerd[1681]: time="2025-10-31T05:25:18.529595533Z" level=info msg="Start event monitor" Oct 31 05:25:18.529846 containerd[1681]: time="2025-10-31T05:25:18.529838761Z" level=info msg="Start cni network conf syncer for default" Oct 31 05:25:18.529880 containerd[1681]: time="2025-10-31T05:25:18.529874311Z" level=info msg="Start streaming server" Oct 31 05:25:18.529928 containerd[1681]: time="2025-10-31T05:25:18.529920257Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 31 05:25:18.529997 containerd[1681]: time="2025-10-31T05:25:18.529990040Z" level=info msg="runtime interface starting up..." Oct 31 05:25:18.530027 containerd[1681]: time="2025-10-31T05:25:18.530021972Z" level=info msg="starting plugins..." Oct 31 05:25:18.530091 containerd[1681]: time="2025-10-31T05:25:18.530052877Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 31 05:25:18.530490 containerd[1681]: time="2025-10-31T05:25:18.530481262Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 31 05:25:18.530649 containerd[1681]: time="2025-10-31T05:25:18.530640507Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 31 05:25:18.531051 containerd[1681]: time="2025-10-31T05:25:18.531042606Z" level=info msg="containerd successfully booted in 0.128048s" Oct 31 05:25:18.531126 systemd[1]: Started containerd.service - containerd container runtime. Oct 31 05:25:18.552798 tar[1663]: linux-amd64/README.md Oct 31 05:25:18.566466 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 31 05:25:19.613042 systemd-networkd[1579]: ens192: Gained IPv6LL Oct 31 05:25:19.613390 systemd-timesyncd[1539]: Network configuration changed, trying to establish connection. Oct 31 05:25:19.614435 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 31 05:25:19.615397 systemd[1]: Reached target network-online.target - Network is Online. Oct 31 05:25:19.616930 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... 
Oct 31 05:25:19.622936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 05:25:19.625041 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 31 05:25:19.665309 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 31 05:25:19.681404 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 31 05:25:19.681701 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Oct 31 05:25:19.682286 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 31 05:25:20.473149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 05:25:20.473493 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 31 05:25:20.474127 systemd[1]: Startup finished in 2.814s (kernel) + 6.633s (initrd) + 5.366s (userspace) = 14.814s. Oct 31 05:25:20.476134 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 05:25:20.556078 systemd-timesyncd[1539]: Network configuration changed, trying to establish connection. Oct 31 05:25:20.844129 login[1805]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Oct 31 05:25:20.846403 login[1806]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 31 05:25:20.853488 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 31 05:25:20.855034 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 31 05:25:20.861027 systemd-logind[1655]: New session 2 of user core. Oct 31 05:25:20.868153 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 31 05:25:20.870685 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 31 05:25:20.879573 (systemd)[1858]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:25:20.882340 systemd-logind[1655]: New session c1 of user core. Oct 31 05:25:20.970254 systemd[1858]: Queued start job for default target default.target. Oct 31 05:25:20.975820 systemd[1858]: Created slice app.slice - User Application Slice. Oct 31 05:25:20.975841 systemd[1858]: Reached target paths.target - Paths. Oct 31 05:25:20.975965 systemd[1858]: Reached target timers.target - Timers. Oct 31 05:25:20.976824 systemd[1858]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 31 05:25:20.984463 systemd[1858]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 31 05:25:20.984498 systemd[1858]: Reached target sockets.target - Sockets. Oct 31 05:25:20.984526 systemd[1858]: Reached target basic.target - Basic System. Oct 31 05:25:20.984551 systemd[1858]: Reached target default.target - Main User Target. Oct 31 05:25:20.984568 systemd[1858]: Startup finished in 97ms. Oct 31 05:25:20.984635 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 31 05:25:20.988997 systemd[1]: Started session-2.scope - Session 2 of User core. 
Oct 31 05:25:21.276451 kubelet[1847]: E1031 05:25:21.276416 1847 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 05:25:21.277969 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 05:25:21.278056 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 05:25:21.278388 systemd[1]: kubelet.service: Consumed 668ms CPU time, 268.4M memory peak. Oct 31 05:25:21.844412 login[1805]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 31 05:25:21.848100 systemd-logind[1655]: New session 1 of user core. Oct 31 05:25:21.855072 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 31 05:25:31.377545 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 31 05:25:31.378872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 05:25:31.677598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 05:25:31.688304 (kubelet)[1896]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 05:25:31.739836 kubelet[1896]: E1031 05:25:31.739785 1896 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 05:25:31.742418 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 05:25:31.742572 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 05:25:31.742926 systemd[1]: kubelet.service: Consumed 113ms CPU time, 109M memory peak. Oct 31 05:25:41.877462 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 31 05:25:41.878874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 05:25:42.241520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 05:25:42.250133 (kubelet)[1911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 05:25:42.299268 kubelet[1911]: E1031 05:25:42.299243 1911 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 05:25:42.300684 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 05:25:42.300787 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 05:25:42.301126 systemd[1]: kubelet.service: Consumed 101ms CPU time, 110.1M memory peak. Oct 31 05:25:48.239971 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 31 05:25:48.242097 systemd[1]: Started sshd@0-139.178.70.106:22-147.75.109.163:39244.service - OpenSSH per-connection server daemon (147.75.109.163:39244). 
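Editor's note: kubelet exits with status 1 above because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by kubeadm init or kubeadm join, so the failure and the restart loop that follows are expected until the node is bootstrapped. A minimal illustrative KubeletConfiguration that would satisfy the file load; the cgroupDriver value is an assumption chosen to match the SystemdCgroup setting above, not something read from this host:

    # Hypothetical minimal kubelet config; kubeadm generates the real one.
    mkdir -p /var/lib/kubelet
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF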
Oct 31 05:25:48.295047 sshd[1918]: Accepted publickey for core from 147.75.109.163 port 39244 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4 Oct 31 05:25:48.295881 sshd-session[1918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 05:25:48.298586 systemd-logind[1655]: New session 3 of user core. Oct 31 05:25:48.317105 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 31 05:25:48.330980 systemd[1]: Started sshd@1-139.178.70.106:22-147.75.109.163:39254.service - OpenSSH per-connection server daemon (147.75.109.163:39254). Oct 31 05:25:48.373469 sshd[1924]: Accepted publickey for core from 147.75.109.163 port 39254 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4 Oct 31 05:25:48.374432 sshd-session[1924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 05:25:48.377887 systemd-logind[1655]: New session 4 of user core. Oct 31 05:25:48.388022 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 31 05:25:48.395778 sshd[1927]: Connection closed by 147.75.109.163 port 39254 Oct 31 05:25:48.396657 sshd-session[1924]: pam_unix(sshd:session): session closed for user core Oct 31 05:25:48.402055 systemd[1]: sshd@1-139.178.70.106:22-147.75.109.163:39254.service: Deactivated successfully. Oct 31 05:25:48.403030 systemd[1]: session-4.scope: Deactivated successfully. Oct 31 05:25:48.403617 systemd-logind[1655]: Session 4 logged out. Waiting for processes to exit. Oct 31 05:25:48.404953 systemd[1]: Started sshd@2-139.178.70.106:22-147.75.109.163:39262.service - OpenSSH per-connection server daemon (147.75.109.163:39262). Oct 31 05:25:48.406508 systemd-logind[1655]: Removed session 4. Oct 31 05:25:48.444070 sshd[1933]: Accepted publickey for core from 147.75.109.163 port 39262 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4 Oct 31 05:25:48.444906 sshd-session[1933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 05:25:48.447502 systemd-logind[1655]: New session 5 of user core. Oct 31 05:25:48.455006 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 31 05:25:48.460567 sshd[1936]: Connection closed by 147.75.109.163 port 39262 Oct 31 05:25:48.460501 sshd-session[1933]: pam_unix(sshd:session): session closed for user core Oct 31 05:25:48.467550 systemd[1]: sshd@2-139.178.70.106:22-147.75.109.163:39262.service: Deactivated successfully. Oct 31 05:25:48.468679 systemd[1]: session-5.scope: Deactivated successfully. Oct 31 05:25:48.470303 systemd-logind[1655]: Session 5 logged out. Waiting for processes to exit. Oct 31 05:25:48.471164 systemd[1]: Started sshd@3-139.178.70.106:22-147.75.109.163:39264.service - OpenSSH per-connection server daemon (147.75.109.163:39264). Oct 31 05:25:48.472036 systemd-logind[1655]: Removed session 5. Oct 31 05:25:48.508962 sshd[1942]: Accepted publickey for core from 147.75.109.163 port 39264 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4 Oct 31 05:25:48.509468 sshd-session[1942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 05:25:48.512650 systemd-logind[1655]: New session 6 of user core. Oct 31 05:25:48.522048 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 31 05:25:48.530378 sshd[1945]: Connection closed by 147.75.109.163 port 39264 Oct 31 05:25:48.530716 sshd-session[1942]: pam_unix(sshd:session): session closed for user core Oct 31 05:25:48.537175 systemd[1]: sshd@3-139.178.70.106:22-147.75.109.163:39264.service: Deactivated successfully. Oct 31 05:25:48.538441 systemd[1]: session-6.scope: Deactivated successfully. Oct 31 05:25:48.539075 systemd-logind[1655]: Session 6 logged out. Waiting for processes to exit. Oct 31 05:25:48.541208 systemd[1]: Started sshd@4-139.178.70.106:22-147.75.109.163:39266.service - OpenSSH per-connection server daemon (147.75.109.163:39266). Oct 31 05:25:48.543729 systemd-logind[1655]: Removed session 6. Oct 31 05:25:48.582618 sshd[1951]: Accepted publickey for core from 147.75.109.163 port 39266 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4 Oct 31 05:25:48.583722 sshd-session[1951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 05:25:48.587360 systemd-logind[1655]: New session 7 of user core. Oct 31 05:25:48.594072 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 31 05:25:48.618589 sudo[1955]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 31 05:25:48.618811 sudo[1955]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 05:25:48.632234 sudo[1955]: pam_unix(sudo:session): session closed for user root Oct 31 05:25:48.633144 sshd[1954]: Connection closed by 147.75.109.163 port 39266 Oct 31 05:25:48.633465 sshd-session[1951]: pam_unix(sshd:session): session closed for user core Oct 31 05:25:48.643059 systemd[1]: sshd@4-139.178.70.106:22-147.75.109.163:39266.service: Deactivated successfully. Oct 31 05:25:48.644194 systemd[1]: session-7.scope: Deactivated successfully. Oct 31 05:25:48.645126 systemd-logind[1655]: Session 7 logged out. Waiting for processes to exit. Oct 31 05:25:48.645790 systemd-logind[1655]: Removed session 7. Oct 31 05:25:48.646771 systemd[1]: Started sshd@5-139.178.70.106:22-147.75.109.163:39280.service - OpenSSH per-connection server daemon (147.75.109.163:39280). Oct 31 05:25:48.687927 sshd[1961]: Accepted publickey for core from 147.75.109.163 port 39280 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4 Oct 31 05:25:48.688666 sshd-session[1961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 05:25:48.691253 systemd-logind[1655]: New session 8 of user core. Oct 31 05:25:48.698003 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 31 05:25:48.705110 sudo[1966]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 31 05:25:48.705263 sudo[1966]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 05:25:48.707487 sudo[1966]: pam_unix(sudo:session): session closed for user root Oct 31 05:25:48.710985 sudo[1965]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 31 05:25:48.711124 sudo[1965]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 05:25:48.717343 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 31 05:25:48.738886 augenrules[1988]: No rules Oct 31 05:25:48.739241 systemd[1]: audit-rules.service: Deactivated successfully. Oct 31 05:25:48.739429 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
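Editor's note: augenrules reports "No rules" above after the drop-in rule files under /etc/audit/rules.d were removed by the earlier sudo commands and audit-rules.service was restarted. A hedged sketch of how a rule would normally be added back and loaded; the watch path and key below are placeholders, not taken from this host:

    # Hypothetical audit rule; augenrules merges /etc/audit/rules.d/*.rules
    # and loads the combined rule set into the kernel.
    echo '-w /etc/ssh/sshd_config -p wa -k sshd-config' >/etc/audit/rules.d/90-local.rules
    augenrules --load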
Oct 31 05:25:48.740082 sudo[1965]: pam_unix(sudo:session): session closed for user root Oct 31 05:25:48.741250 sshd[1964]: Connection closed by 147.75.109.163 port 39280 Oct 31 05:25:48.741453 sshd-session[1961]: pam_unix(sshd:session): session closed for user core Oct 31 05:25:48.747068 systemd[1]: sshd@5-139.178.70.106:22-147.75.109.163:39280.service: Deactivated successfully. Oct 31 05:25:48.747907 systemd[1]: session-8.scope: Deactivated successfully. Oct 31 05:25:48.748408 systemd-logind[1655]: Session 8 logged out. Waiting for processes to exit. Oct 31 05:25:48.749907 systemd[1]: Started sshd@6-139.178.70.106:22-147.75.109.163:39288.service - OpenSSH per-connection server daemon (147.75.109.163:39288). Oct 31 05:25:48.751048 systemd-logind[1655]: Removed session 8. Oct 31 05:25:48.782232 sshd[1997]: Accepted publickey for core from 147.75.109.163 port 39288 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4 Oct 31 05:25:48.783449 sshd-session[1997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 05:25:48.786100 systemd-logind[1655]: New session 9 of user core. Oct 31 05:25:48.796998 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 31 05:25:48.803845 sudo[2001]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 31 05:25:48.804179 sudo[2001]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 05:25:49.254945 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 31 05:25:49.267139 (dockerd)[2019]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 31 05:25:49.572693 dockerd[2019]: time="2025-10-31T05:25:49.572603634Z" level=info msg="Starting up" Oct 31 05:25:49.573189 dockerd[2019]: time="2025-10-31T05:25:49.573171168Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 31 05:25:49.580579 dockerd[2019]: time="2025-10-31T05:25:49.580546061Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 31 05:25:49.588955 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport636215693-merged.mount: Deactivated successfully. Oct 31 05:25:49.609509 dockerd[2019]: time="2025-10-31T05:25:49.609333550Z" level=info msg="Loading containers: start." Oct 31 05:25:49.618938 kernel: Initializing XFRM netlink socket Oct 31 05:25:49.818051 systemd-timesyncd[1539]: Network configuration changed, trying to establish connection. Oct 31 05:25:49.847180 systemd-networkd[1579]: docker0: Link UP Oct 31 05:25:49.848591 dockerd[2019]: time="2025-10-31T05:25:49.848567744Z" level=info msg="Loading containers: done." Oct 31 05:25:49.856532 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4163550972-merged.mount: Deactivated successfully. 
Oct 31 05:25:49.858277 dockerd[2019]: time="2025-10-31T05:25:49.858246549Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 31 05:25:49.858342 dockerd[2019]: time="2025-10-31T05:25:49.858313844Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 31 05:25:49.858375 dockerd[2019]: time="2025-10-31T05:25:49.858362361Z" level=info msg="Initializing buildkit" Oct 31 05:25:49.871636 dockerd[2019]: time="2025-10-31T05:25:49.871564283Z" level=info msg="Completed buildkit initialization" Oct 31 05:27:27.593755 systemd-resolved[1340]: Clock change detected. Flushing caches. Oct 31 05:27:27.594041 systemd-timesyncd[1539]: Contacted time server 23.142.248.8:123 (2.flatcar.pool.ntp.org). Oct 31 05:27:27.594078 systemd-timesyncd[1539]: Initial clock synchronization to Fri 2025-10-31 05:27:27.593453 UTC. Oct 31 05:27:27.596869 dockerd[2019]: time="2025-10-31T05:27:27.596831447Z" level=info msg="Daemon has completed initialization" Oct 31 05:27:27.597466 dockerd[2019]: time="2025-10-31T05:27:27.596995247Z" level=info msg="API listen on /run/docker.sock" Oct 31 05:27:27.597395 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 31 05:27:28.422837 containerd[1681]: time="2025-10-31T05:27:28.422808894Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 31 05:27:29.123225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2717574113.mount: Deactivated successfully. Oct 31 05:27:30.060484 containerd[1681]: time="2025-10-31T05:27:30.059991652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:27:30.062027 containerd[1681]: time="2025-10-31T05:27:30.062009697Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Oct 31 05:27:30.070197 containerd[1681]: time="2025-10-31T05:27:30.070179217Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:27:30.077212 containerd[1681]: time="2025-10-31T05:27:30.077198779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:27:30.077755 containerd[1681]: time="2025-10-31T05:27:30.077737477Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.654905558s" Oct 31 05:27:30.077789 containerd[1681]: time="2025-10-31T05:27:30.077758455Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Oct 31 05:27:30.078160 containerd[1681]: time="2025-10-31T05:27:30.078139424Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Oct 31 05:27:30.098353 systemd[1]: kubelet.service: Scheduled restart 
job, restart counter is at 3. Oct 31 05:27:30.101015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 05:27:30.302094 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 05:27:30.308109 (kubelet)[2294]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 05:27:30.328720 kubelet[2294]: E1031 05:27:30.328640 2294 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 05:27:30.330944 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 05:27:30.331029 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 05:27:30.331222 systemd[1]: kubelet.service: Consumed 95ms CPU time, 110M memory peak. Oct 31 05:27:32.130400 containerd[1681]: time="2025-10-31T05:27:32.130352382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:27:32.135188 containerd[1681]: time="2025-10-31T05:27:32.135156858Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Oct 31 05:27:32.147875 containerd[1681]: time="2025-10-31T05:27:32.147814349Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:27:32.156394 containerd[1681]: time="2025-10-31T05:27:32.156346303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:27:32.157220 containerd[1681]: time="2025-10-31T05:27:32.157098136Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.078779536s" Oct 31 05:27:32.157220 containerd[1681]: time="2025-10-31T05:27:32.157124854Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Oct 31 05:27:32.157454 containerd[1681]: time="2025-10-31T05:27:32.157441705Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Oct 31 05:27:33.669273 containerd[1681]: time="2025-10-31T05:27:33.669241092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:27:33.669956 containerd[1681]: time="2025-10-31T05:27:33.669940060Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Oct 31 05:27:33.670290 containerd[1681]: time="2025-10-31T05:27:33.670274390Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:27:33.671980 containerd[1681]: time="2025-10-31T05:27:33.671963717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:27:33.672763 containerd[1681]: time="2025-10-31T05:27:33.672744539Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.515176524s" Oct 31 05:27:33.672792 containerd[1681]: time="2025-10-31T05:27:33.672764594Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Oct 31 05:27:33.673110 containerd[1681]: time="2025-10-31T05:27:33.673096244Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Oct 31 05:27:34.668249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount70859112.mount: Deactivated successfully. Oct 31 05:27:35.381265 containerd[1681]: time="2025-10-31T05:27:35.381228048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:27:35.382188 containerd[1681]: time="2025-10-31T05:27:35.382166510Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Oct 31 05:27:35.382502 containerd[1681]: time="2025-10-31T05:27:35.382478515Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:27:35.383773 containerd[1681]: time="2025-10-31T05:27:35.383753322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:27:35.384160 containerd[1681]: time="2025-10-31T05:27:35.384020295Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.710908133s" Oct 31 05:27:35.384160 containerd[1681]: time="2025-10-31T05:27:35.384108859Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Oct 31 05:27:35.384446 containerd[1681]: time="2025-10-31T05:27:35.384420097Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Oct 31 05:27:36.176413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1025182795.mount: Deactivated successfully. 
Oct 31 05:27:37.364087 containerd[1681]: time="2025-10-31T05:27:37.363998695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:27:37.373799 containerd[1681]: time="2025-10-31T05:27:37.373764503Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Oct 31 05:27:37.374770 containerd[1681]: time="2025-10-31T05:27:37.374620314Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:27:37.376754 containerd[1681]: time="2025-10-31T05:27:37.376735909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:27:37.377258 containerd[1681]: time="2025-10-31T05:27:37.377241674Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.992800987s" Oct 31 05:27:37.377295 containerd[1681]: time="2025-10-31T05:27:37.377259423Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Oct 31 05:27:37.377827 containerd[1681]: time="2025-10-31T05:27:37.377535787Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 31 05:27:38.280765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3189893944.mount: Deactivated successfully. 
Oct 31 05:27:38.335758 containerd[1681]: time="2025-10-31T05:27:38.335706701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 05:27:38.344850 containerd[1681]: time="2025-10-31T05:27:38.344811019Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 31 05:27:38.348999 containerd[1681]: time="2025-10-31T05:27:38.348875716Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 05:27:38.355018 containerd[1681]: time="2025-10-31T05:27:38.354974002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 05:27:38.355372 containerd[1681]: time="2025-10-31T05:27:38.355277784Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 977.724391ms" Oct 31 05:27:38.355372 containerd[1681]: time="2025-10-31T05:27:38.355302491Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 31 05:27:38.355882 containerd[1681]: time="2025-10-31T05:27:38.355788777Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Oct 31 05:27:39.212685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3703387282.mount: Deactivated successfully. Oct 31 05:27:40.347468 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 31 05:27:40.349307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 05:27:41.022562 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 05:27:41.025539 (kubelet)[2433]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 05:27:41.275890 kubelet[2433]: E1031 05:27:41.275709 2433 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 05:27:41.279035 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 05:27:41.279123 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 05:27:41.279334 systemd[1]: kubelet.service: Consumed 138ms CPU time, 105.6M memory peak. Oct 31 05:27:41.383400 update_engine[1656]: I20251031 05:27:41.383342 1656 update_attempter.cc:509] Updating boot flags... 
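[Editor's note] The kubelet.service failures above (restart counters 3 and 4) all trace back to the same cause: /var/lib/kubelet/config.yaml does not exist yet because the unit is being started before kubeadm has written its configuration, so systemd keeps cycling the restart counter until the file appears. Purely as orientation, and not the file later generated on this host, a minimal KubeletConfiguration at that path looks roughly like this:

    # Hypothetical sketch of /var/lib/kubelet/config.yaml -- normally written by
    # `kubeadm init`/`kubeadm join`, not created by hand; values are illustrative.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # matches the "cgroupDriver=\"systemd\"" lines later in this log
    staticPodPath: /etc/kubernetes/manifests   # matches the "Adding static pod path" line later in this log
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt

Once kubeadm writes the real file, the same unit starts cleanly, as the later kubelet[2603] and kubelet[2992] startups show.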
Oct 31 05:27:41.525088 containerd[1681]: time="2025-10-31T05:27:41.522324304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:27:41.525299 containerd[1681]: time="2025-10-31T05:27:41.525192174Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Oct 31 05:27:41.525648 containerd[1681]: time="2025-10-31T05:27:41.525631628Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:27:41.530302 containerd[1681]: time="2025-10-31T05:27:41.530226126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:27:41.535234 containerd[1681]: time="2025-10-31T05:27:41.535205016Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.179384719s" Oct 31 05:27:41.535234 containerd[1681]: time="2025-10-31T05:27:41.535230881Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Oct 31 05:27:43.932638 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 05:27:43.933112 systemd[1]: kubelet.service: Consumed 138ms CPU time, 105.6M memory peak. Oct 31 05:27:43.934930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 05:27:43.956705 systemd[1]: Reload requested from client PID 2491 ('systemctl') (unit session-9.scope)... Oct 31 05:27:43.956729 systemd[1]: Reloading... Oct 31 05:27:44.023951 zram_generator::config[2539]: No configuration found. Oct 31 05:27:44.092222 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 31 05:27:44.162811 systemd[1]: Reloading finished in 205 ms. Oct 31 05:27:44.221271 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 31 05:27:44.221344 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 31 05:27:44.221602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 05:27:44.223844 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 05:27:44.665595 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 05:27:44.673091 (kubelet)[2603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 05:27:44.735121 kubelet[2603]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 05:27:44.735121 kubelet[2603]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Oct 31 05:27:44.735121 kubelet[2603]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 05:27:44.735401 kubelet[2603]: I1031 05:27:44.735176 2603 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 05:27:45.089978 kubelet[2603]: I1031 05:27:45.089712 2603 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 31 05:27:45.089978 kubelet[2603]: I1031 05:27:45.089736 2603 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 05:27:45.090284 kubelet[2603]: I1031 05:27:45.090275 2603 server.go:956] "Client rotation is on, will bootstrap in background" Oct 31 05:27:45.131622 kubelet[2603]: E1031 05:27:45.131195 2603 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 31 05:27:45.131622 kubelet[2603]: I1031 05:27:45.131289 2603 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 05:27:45.187521 kubelet[2603]: I1031 05:27:45.187505 2603 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 31 05:27:45.193417 kubelet[2603]: I1031 05:27:45.193330 2603 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 31 05:27:45.199944 kubelet[2603]: I1031 05:27:45.199716 2603 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 05:27:45.202553 kubelet[2603]: I1031 05:27:45.199775 2603 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 05:27:45.203078 kubelet[2603]: I1031 05:27:45.202704 2603 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 05:27:45.203078 kubelet[2603]: I1031 05:27:45.202716 2603 container_manager_linux.go:303] "Creating device plugin manager" Oct 31 05:27:45.203078 kubelet[2603]: I1031 05:27:45.202816 2603 state_mem.go:36] "Initialized new in-memory state store" Oct 31 05:27:45.205126 kubelet[2603]: I1031 05:27:45.205111 2603 kubelet.go:480] "Attempting to sync node with API server" Oct 31 05:27:45.205207 kubelet[2603]: I1031 05:27:45.205199 2603 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 05:27:45.205255 kubelet[2603]: I1031 05:27:45.205251 2603 kubelet.go:386] "Adding apiserver pod source" Oct 31 05:27:45.207362 kubelet[2603]: I1031 05:27:45.207347 2603 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 05:27:45.213625 kubelet[2603]: E1031 05:27:45.213595 2603 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 31 05:27:45.223966 kubelet[2603]: I1031 05:27:45.223267 2603 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 31 05:27:45.223966 kubelet[2603]: I1031 05:27:45.223819 2603 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 
31 05:27:45.225710 kubelet[2603]: W1031 05:27:45.225688 2603 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 31 05:27:45.226646 kubelet[2603]: E1031 05:27:45.226617 2603 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 31 05:27:45.249118 kubelet[2603]: I1031 05:27:45.249093 2603 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 05:27:45.249200 kubelet[2603]: I1031 05:27:45.249171 2603 server.go:1289] "Started kubelet" Oct 31 05:27:45.259207 kubelet[2603]: I1031 05:27:45.259156 2603 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 05:27:45.272054 kubelet[2603]: I1031 05:27:45.271977 2603 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 05:27:45.272779 kubelet[2603]: I1031 05:27:45.272704 2603 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 05:27:45.274945 kubelet[2603]: I1031 05:27:45.274929 2603 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 05:27:45.277935 kubelet[2603]: I1031 05:27:45.277701 2603 server.go:317] "Adding debug handlers to kubelet server" Oct 31 05:27:45.285279 kubelet[2603]: E1031 05:27:45.281713 2603 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.106:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18737c3969081c92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-31 05:27:45.249115282 +0000 UTC m=+0.573441774,LastTimestamp:2025-10-31 05:27:45.249115282 +0000 UTC m=+0.573441774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 31 05:27:45.286393 kubelet[2603]: I1031 05:27:45.286354 2603 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 05:27:45.289338 kubelet[2603]: I1031 05:27:45.289310 2603 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 05:27:45.289508 kubelet[2603]: E1031 05:27:45.289497 2603 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 05:27:45.293213 kubelet[2603]: I1031 05:27:45.293190 2603 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 31 05:27:45.293285 kubelet[2603]: I1031 05:27:45.293240 2603 reconciler.go:26] "Reconciler: start to sync state" Oct 31 05:27:45.293949 kubelet[2603]: E1031 05:27:45.293489 2603 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="200ms" Oct 31 
05:27:45.293949 kubelet[2603]: E1031 05:27:45.293828 2603 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 31 05:27:45.294020 kubelet[2603]: I1031 05:27:45.293913 2603 factory.go:223] Registration of the systemd container factory successfully Oct 31 05:27:45.294073 kubelet[2603]: I1031 05:27:45.294046 2603 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 05:27:45.294955 kubelet[2603]: I1031 05:27:45.294937 2603 factory.go:223] Registration of the containerd container factory successfully Oct 31 05:27:45.301961 kubelet[2603]: I1031 05:27:45.301057 2603 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 31 05:27:45.301961 kubelet[2603]: I1031 05:27:45.301893 2603 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 31 05:27:45.301961 kubelet[2603]: I1031 05:27:45.301905 2603 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 31 05:27:45.301961 kubelet[2603]: I1031 05:27:45.301930 2603 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 31 05:27:45.301961 kubelet[2603]: I1031 05:27:45.301935 2603 kubelet.go:2436] "Starting kubelet main sync loop" Oct 31 05:27:45.301961 kubelet[2603]: E1031 05:27:45.301960 2603 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 05:27:45.306235 kubelet[2603]: E1031 05:27:45.306212 2603 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 31 05:27:45.313223 kubelet[2603]: E1031 05:27:45.313176 2603 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 05:27:45.315470 kubelet[2603]: I1031 05:27:45.315445 2603 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 05:27:45.315538 kubelet[2603]: I1031 05:27:45.315533 2603 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 05:27:45.315594 kubelet[2603]: I1031 05:27:45.315588 2603 state_mem.go:36] "Initialized new in-memory state store" Oct 31 05:27:45.321980 kubelet[2603]: I1031 05:27:45.321961 2603 policy_none.go:49] "None policy: Start" Oct 31 05:27:45.322067 kubelet[2603]: I1031 05:27:45.322062 2603 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 05:27:45.322109 kubelet[2603]: I1031 05:27:45.322104 2603 state_mem.go:35] "Initializing new in-memory state store" Oct 31 05:27:45.334544 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 31 05:27:45.346719 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
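[Editor's note] The HardEvictionThresholds embedded in the container_manager_linux nodeConfig dump above are the kubelet's built-in defaults. Expressed in KubeletConfiguration form (a readability sketch only, not a file present on this host) they correspond to:

    # evictionHard equivalent of the HardEvictionThresholds logged above (default values).
    evictionHard:
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"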
Oct 31 05:27:45.350116 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 31 05:27:45.360798 kubelet[2603]: E1031 05:27:45.360773 2603 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 31 05:27:45.361378 kubelet[2603]: I1031 05:27:45.361129 2603 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 05:27:45.361378 kubelet[2603]: I1031 05:27:45.361138 2603 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 05:27:45.361378 kubelet[2603]: I1031 05:27:45.361369 2603 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 05:27:45.363593 kubelet[2603]: E1031 05:27:45.363239 2603 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 31 05:27:45.363593 kubelet[2603]: E1031 05:27:45.363266 2603 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 31 05:27:45.409698 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Oct 31 05:27:45.415557 kubelet[2603]: E1031 05:27:45.415532 2603 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 05:27:45.418827 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. Oct 31 05:27:45.422639 kubelet[2603]: E1031 05:27:45.422616 2603 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 05:27:45.430614 systemd[1]: Created slice kubepods-burstable-pod458148ce844f17f8f66cb54d2824e808.slice - libcontainer container kubepods-burstable-pod458148ce844f17f8f66cb54d2824e808.slice. 
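[Editor's note] The kubepods-burstable-pod*.slice units created above belong to the control-plane static pods the kubelet reads from /etc/kubernetes/manifests (see the earlier "Adding static pod path" line); the "Unable to register node" and mirror-pod errors are expected while the API server at 139.178.70.106:6443 is still coming up. For orientation, a static pod manifest in that directory is an ordinary Pod object, along the lines of this hypothetical minimal example (the real kubeadm-generated manifests carry many more fields):

    # Hypothetical minimal static pod manifest, e.g. /etc/kubernetes/manifests/kube-scheduler.yaml.
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-scheduler
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
        - name: kube-scheduler
          image: registry.k8s.io/kube-scheduler:v1.33.5   # image pulled earlier in this log
          command: ["kube-scheduler", "--kubeconfig=/etc/kubernetes/scheduler.conf"]
          volumeMounts:
            - name: kubeconfig
              mountPath: /etc/kubernetes/scheduler.conf
              readOnly: true
      volumes:
        - name: kubeconfig                                # matches the "kubeconfig" host-path volume
          hostPath:                                       # attached for kube-scheduler-localhost below
            path: /etc/kubernetes/scheduler.conf
            type: File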
Oct 31 05:27:45.432093 kubelet[2603]: E1031 05:27:45.432074 2603 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 05:27:45.463126 kubelet[2603]: I1031 05:27:45.463083 2603 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 05:27:45.463436 kubelet[2603]: E1031 05:27:45.463420 2603 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" Oct 31 05:27:45.494390 kubelet[2603]: I1031 05:27:45.494115 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 31 05:27:45.494390 kubelet[2603]: I1031 05:27:45.494147 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/458148ce844f17f8f66cb54d2824e808-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"458148ce844f17f8f66cb54d2824e808\") " pod="kube-system/kube-apiserver-localhost" Oct 31 05:27:45.494390 kubelet[2603]: I1031 05:27:45.494173 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 05:27:45.494390 kubelet[2603]: I1031 05:27:45.494199 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 05:27:45.494390 kubelet[2603]: I1031 05:27:45.494225 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/458148ce844f17f8f66cb54d2824e808-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"458148ce844f17f8f66cb54d2824e808\") " pod="kube-system/kube-apiserver-localhost" Oct 31 05:27:45.494647 kubelet[2603]: I1031 05:27:45.494250 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/458148ce844f17f8f66cb54d2824e808-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"458148ce844f17f8f66cb54d2824e808\") " pod="kube-system/kube-apiserver-localhost" Oct 31 05:27:45.494647 kubelet[2603]: I1031 05:27:45.494271 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 05:27:45.494647 kubelet[2603]: I1031 05:27:45.494299 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 05:27:45.494647 kubelet[2603]: I1031 05:27:45.494328 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 05:27:45.494877 kubelet[2603]: E1031 05:27:45.494853 2603 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="400ms" Oct 31 05:27:45.665244 kubelet[2603]: I1031 05:27:45.665183 2603 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 05:27:45.666104 kubelet[2603]: E1031 05:27:45.666074 2603 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" Oct 31 05:27:45.718379 containerd[1681]: time="2025-10-31T05:27:45.718128957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Oct 31 05:27:45.723493 containerd[1681]: time="2025-10-31T05:27:45.723470188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Oct 31 05:27:45.732959 containerd[1681]: time="2025-10-31T05:27:45.732868831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:458148ce844f17f8f66cb54d2824e808,Namespace:kube-system,Attempt:0,}" Oct 31 05:27:45.867877 containerd[1681]: time="2025-10-31T05:27:45.867831477Z" level=info msg="connecting to shim 94d56714fecf93e5a1bbadf49973577b3e53b9ca82b29ff16ff90ffda872a2d5" address="unix:///run/containerd/s/5d3a3ec53e9c4a26259e685c1fc755b524008bc1b5c75faa4a3f94835d0fad66" namespace=k8s.io protocol=ttrpc version=3 Oct 31 05:27:45.871084 containerd[1681]: time="2025-10-31T05:27:45.868112905Z" level=info msg="connecting to shim ec16dd4fe37a61ef33a0991bc0087dd6469ff7a0b39349400831d9c8ebf53c7b" address="unix:///run/containerd/s/82797702bfa6a5ffa1f4dfde19f45c0f7014a8f3db5734b012d60567d1a0c56e" namespace=k8s.io protocol=ttrpc version=3 Oct 31 05:27:45.871572 containerd[1681]: time="2025-10-31T05:27:45.871545585Z" level=info msg="connecting to shim ba31d4c749793c0045348e0c7d9138e1cb3d72416964bb0714ed6597dcc783e9" address="unix:///run/containerd/s/16ccfc855610e8a0d67be1d58d398fc1ba40bf02da933510ab299709fe8ee5c8" namespace=k8s.io protocol=ttrpc version=3 Oct 31 05:27:45.895814 kubelet[2603]: E1031 05:27:45.895783 2603 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="800ms" Oct 31 05:27:46.050118 systemd[1]: Started cri-containerd-ec16dd4fe37a61ef33a0991bc0087dd6469ff7a0b39349400831d9c8ebf53c7b.scope - libcontainer 
container ec16dd4fe37a61ef33a0991bc0087dd6469ff7a0b39349400831d9c8ebf53c7b. Oct 31 05:27:46.055232 systemd[1]: Started cri-containerd-94d56714fecf93e5a1bbadf49973577b3e53b9ca82b29ff16ff90ffda872a2d5.scope - libcontainer container 94d56714fecf93e5a1bbadf49973577b3e53b9ca82b29ff16ff90ffda872a2d5. Oct 31 05:27:46.056677 systemd[1]: Started cri-containerd-ba31d4c749793c0045348e0c7d9138e1cb3d72416964bb0714ed6597dcc783e9.scope - libcontainer container ba31d4c749793c0045348e0c7d9138e1cb3d72416964bb0714ed6597dcc783e9. Oct 31 05:27:46.067815 kubelet[2603]: I1031 05:27:46.067610 2603 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 05:27:46.067815 kubelet[2603]: E1031 05:27:46.067799 2603 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" Oct 31 05:27:46.102316 containerd[1681]: time="2025-10-31T05:27:46.102264659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba31d4c749793c0045348e0c7d9138e1cb3d72416964bb0714ed6597dcc783e9\"" Oct 31 05:27:46.109385 containerd[1681]: time="2025-10-31T05:27:46.108954584Z" level=info msg="CreateContainer within sandbox \"ba31d4c749793c0045348e0c7d9138e1cb3d72416964bb0714ed6597dcc783e9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 31 05:27:46.111241 containerd[1681]: time="2025-10-31T05:27:46.111219069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:458148ce844f17f8f66cb54d2824e808,Namespace:kube-system,Attempt:0,} returns sandbox id \"94d56714fecf93e5a1bbadf49973577b3e53b9ca82b29ff16ff90ffda872a2d5\"" Oct 31 05:27:46.114739 containerd[1681]: time="2025-10-31T05:27:46.114711033Z" level=info msg="CreateContainer within sandbox \"94d56714fecf93e5a1bbadf49973577b3e53b9ca82b29ff16ff90ffda872a2d5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 31 05:27:46.118004 containerd[1681]: time="2025-10-31T05:27:46.117981752Z" level=info msg="Container 23708d206e0365300211b501a1c093538c774ab958a96967cfc1f350d43cafcf: CDI devices from CRI Config.CDIDevices: []" Oct 31 05:27:46.122250 containerd[1681]: time="2025-10-31T05:27:46.122191468Z" level=info msg="Container 1f69b4b99a6af129bc13a54d9a66574aa64429581ad26a33529dceb411cbbcce: CDI devices from CRI Config.CDIDevices: []" Oct 31 05:27:46.127744 containerd[1681]: time="2025-10-31T05:27:46.127712475Z" level=info msg="CreateContainer within sandbox \"94d56714fecf93e5a1bbadf49973577b3e53b9ca82b29ff16ff90ffda872a2d5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1f69b4b99a6af129bc13a54d9a66574aa64429581ad26a33529dceb411cbbcce\"" Oct 31 05:27:46.128262 containerd[1681]: time="2025-10-31T05:27:46.128244014Z" level=info msg="CreateContainer within sandbox \"ba31d4c749793c0045348e0c7d9138e1cb3d72416964bb0714ed6597dcc783e9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"23708d206e0365300211b501a1c093538c774ab958a96967cfc1f350d43cafcf\"" Oct 31 05:27:46.129906 containerd[1681]: time="2025-10-31T05:27:46.129873011Z" level=info msg="StartContainer for \"1f69b4b99a6af129bc13a54d9a66574aa64429581ad26a33529dceb411cbbcce\"" Oct 31 05:27:46.131130 containerd[1681]: time="2025-10-31T05:27:46.131103981Z" level=info msg="StartContainer for 
\"23708d206e0365300211b501a1c093538c774ab958a96967cfc1f350d43cafcf\"" Oct 31 05:27:46.133589 containerd[1681]: time="2025-10-31T05:27:46.133547085Z" level=info msg="connecting to shim 1f69b4b99a6af129bc13a54d9a66574aa64429581ad26a33529dceb411cbbcce" address="unix:///run/containerd/s/5d3a3ec53e9c4a26259e685c1fc755b524008bc1b5c75faa4a3f94835d0fad66" protocol=ttrpc version=3 Oct 31 05:27:46.134081 containerd[1681]: time="2025-10-31T05:27:46.134063215Z" level=info msg="connecting to shim 23708d206e0365300211b501a1c093538c774ab958a96967cfc1f350d43cafcf" address="unix:///run/containerd/s/16ccfc855610e8a0d67be1d58d398fc1ba40bf02da933510ab299709fe8ee5c8" protocol=ttrpc version=3 Oct 31 05:27:46.137302 containerd[1681]: time="2025-10-31T05:27:46.137216922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec16dd4fe37a61ef33a0991bc0087dd6469ff7a0b39349400831d9c8ebf53c7b\"" Oct 31 05:27:46.140443 containerd[1681]: time="2025-10-31T05:27:46.140420712Z" level=info msg="CreateContainer within sandbox \"ec16dd4fe37a61ef33a0991bc0087dd6469ff7a0b39349400831d9c8ebf53c7b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 31 05:27:46.150102 systemd[1]: Started cri-containerd-1f69b4b99a6af129bc13a54d9a66574aa64429581ad26a33529dceb411cbbcce.scope - libcontainer container 1f69b4b99a6af129bc13a54d9a66574aa64429581ad26a33529dceb411cbbcce. Oct 31 05:27:46.160008 systemd[1]: Started cri-containerd-23708d206e0365300211b501a1c093538c774ab958a96967cfc1f350d43cafcf.scope - libcontainer container 23708d206e0365300211b501a1c093538c774ab958a96967cfc1f350d43cafcf. Oct 31 05:27:46.175614 containerd[1681]: time="2025-10-31T05:27:46.175594511Z" level=info msg="Container 7e0ba5f0e132de17f47ae91a29869754f158dff9b6ee5936e2f45bfb4f8e715c: CDI devices from CRI Config.CDIDevices: []" Oct 31 05:27:46.180210 kubelet[2603]: E1031 05:27:46.180194 2603 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 31 05:27:46.254207 containerd[1681]: time="2025-10-31T05:27:46.253855238Z" level=info msg="CreateContainer within sandbox \"ec16dd4fe37a61ef33a0991bc0087dd6469ff7a0b39349400831d9c8ebf53c7b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7e0ba5f0e132de17f47ae91a29869754f158dff9b6ee5936e2f45bfb4f8e715c\"" Oct 31 05:27:46.254207 containerd[1681]: time="2025-10-31T05:27:46.254159946Z" level=info msg="StartContainer for \"23708d206e0365300211b501a1c093538c774ab958a96967cfc1f350d43cafcf\" returns successfully" Oct 31 05:27:46.254207 containerd[1681]: time="2025-10-31T05:27:46.253924050Z" level=info msg="StartContainer for \"1f69b4b99a6af129bc13a54d9a66574aa64429581ad26a33529dceb411cbbcce\" returns successfully" Oct 31 05:27:46.254888 containerd[1681]: time="2025-10-31T05:27:46.254859093Z" level=info msg="StartContainer for \"7e0ba5f0e132de17f47ae91a29869754f158dff9b6ee5936e2f45bfb4f8e715c\"" Oct 31 05:27:46.255611 containerd[1681]: time="2025-10-31T05:27:46.255577025Z" level=info msg="connecting to shim 7e0ba5f0e132de17f47ae91a29869754f158dff9b6ee5936e2f45bfb4f8e715c" address="unix:///run/containerd/s/82797702bfa6a5ffa1f4dfde19f45c0f7014a8f3db5734b012d60567d1a0c56e" 
protocol=ttrpc version=3 Oct 31 05:27:46.270028 kubelet[2603]: E1031 05:27:46.270003 2603 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 31 05:27:46.273044 systemd[1]: Started cri-containerd-7e0ba5f0e132de17f47ae91a29869754f158dff9b6ee5936e2f45bfb4f8e715c.scope - libcontainer container 7e0ba5f0e132de17f47ae91a29869754f158dff9b6ee5936e2f45bfb4f8e715c. Oct 31 05:27:46.308427 containerd[1681]: time="2025-10-31T05:27:46.308369114Z" level=info msg="StartContainer for \"7e0ba5f0e132de17f47ae91a29869754f158dff9b6ee5936e2f45bfb4f8e715c\" returns successfully" Oct 31 05:27:46.319027 kubelet[2603]: E1031 05:27:46.318828 2603 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 05:27:46.322389 kubelet[2603]: E1031 05:27:46.322170 2603 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 05:27:46.323174 kubelet[2603]: E1031 05:27:46.323061 2603 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 05:27:46.447909 kubelet[2603]: E1031 05:27:46.447880 2603 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 31 05:27:46.701996 kubelet[2603]: E1031 05:27:46.701947 2603 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 31 05:27:46.701996 kubelet[2603]: E1031 05:27:46.701943 2603 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="1.6s" Oct 31 05:27:46.869218 kubelet[2603]: I1031 05:27:46.869194 2603 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 05:27:46.869449 kubelet[2603]: E1031 05:27:46.869428 2603 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" Oct 31 05:27:47.320334 kubelet[2603]: E1031 05:27:47.320303 2603 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 31 05:27:47.324333 kubelet[2603]: E1031 05:27:47.324175 2603 kubelet.go:3305] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 05:27:47.324630 kubelet[2603]: E1031 05:27:47.324623 2603 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 05:27:48.471009 kubelet[2603]: I1031 05:27:48.470988 2603 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 05:27:48.771576 kubelet[2603]: E1031 05:27:48.771269 2603 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 31 05:27:48.889843 kubelet[2603]: I1031 05:27:48.889640 2603 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 05:27:48.889843 kubelet[2603]: E1031 05:27:48.889670 2603 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 31 05:27:48.898703 kubelet[2603]: E1031 05:27:48.898666 2603 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 05:27:48.999377 kubelet[2603]: E1031 05:27:48.999349 2603 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 05:27:49.100019 kubelet[2603]: E1031 05:27:49.100003 2603 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 05:27:49.216953 kubelet[2603]: I1031 05:27:49.216932 2603 apiserver.go:52] "Watching apiserver" Oct 31 05:27:49.289858 kubelet[2603]: I1031 05:27:49.289832 2603 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 05:27:49.294428 kubelet[2603]: I1031 05:27:49.294314 2603 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 05:27:49.297895 kubelet[2603]: E1031 05:27:49.297866 2603 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 31 05:27:49.297895 kubelet[2603]: I1031 05:27:49.297892 2603 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 05:27:49.299005 kubelet[2603]: E1031 05:27:49.298981 2603 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 31 05:27:49.299005 kubelet[2603]: I1031 05:27:49.298997 2603 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 05:27:49.299922 kubelet[2603]: E1031 05:27:49.299897 2603 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 31 05:27:50.898949 systemd[1]: Reload requested from client PID 2880 ('systemctl') (unit session-9.scope)... Oct 31 05:27:50.898960 systemd[1]: Reloading... Oct 31 05:27:50.968944 zram_generator::config[2928]: No configuration found. Oct 31 05:27:51.046633 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." 
| grep -Po "inet \K[\d.]+") Oct 31 05:27:51.127807 systemd[1]: Reloading finished in 228 ms. Oct 31 05:27:51.145166 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 05:27:51.158659 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 05:27:51.158836 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 05:27:51.158878 systemd[1]: kubelet.service: Consumed 657ms CPU time, 130.9M memory peak. Oct 31 05:27:51.160251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 05:27:51.424579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 05:27:51.430252 (kubelet)[2992]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 05:27:51.670857 kubelet[2992]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 05:27:51.671099 kubelet[2992]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 05:27:51.671136 kubelet[2992]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 05:27:51.684507 kubelet[2992]: I1031 05:27:51.684418 2992 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 05:27:51.691628 kubelet[2992]: I1031 05:27:51.691604 2992 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 31 05:27:51.691628 kubelet[2992]: I1031 05:27:51.691622 2992 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 05:27:51.691784 kubelet[2992]: I1031 05:27:51.691773 2992 server.go:956] "Client rotation is on, will bootstrap in background" Oct 31 05:27:51.706467 kubelet[2992]: I1031 05:27:51.706442 2992 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 31 05:27:51.716881 kubelet[2992]: I1031 05:27:51.716509 2992 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 05:27:51.748159 kubelet[2992]: I1031 05:27:51.748139 2992 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 31 05:27:51.756282 kubelet[2992]: I1031 05:27:51.756254 2992 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 31 05:27:51.756455 kubelet[2992]: I1031 05:27:51.756432 2992 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 05:27:51.756561 kubelet[2992]: I1031 05:27:51.756455 2992 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 05:27:51.776105 kubelet[2992]: I1031 05:27:51.776076 2992 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 05:27:51.776105 kubelet[2992]: I1031 05:27:51.776107 2992 container_manager_linux.go:303] "Creating device plugin manager" Oct 31 05:27:51.782871 kubelet[2992]: I1031 05:27:51.782844 2992 state_mem.go:36] "Initialized new in-memory state store" Oct 31 05:27:51.788417 kubelet[2992]: I1031 05:27:51.788399 2992 kubelet.go:480] "Attempting to sync node with API server" Oct 31 05:27:51.788417 kubelet[2992]: I1031 05:27:51.788419 2992 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 05:27:51.793876 kubelet[2992]: I1031 05:27:51.793741 2992 kubelet.go:386] "Adding apiserver pod source" Oct 31 05:27:51.793876 kubelet[2992]: I1031 05:27:51.793762 2992 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 05:27:51.823571 kubelet[2992]: I1031 05:27:51.823508 2992 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 31 05:27:51.823838 kubelet[2992]: I1031 05:27:51.823823 2992 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 31 05:27:51.827743 kubelet[2992]: I1031 05:27:51.827177 2992 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 05:27:51.827743 kubelet[2992]: I1031 05:27:51.827208 2992 server.go:1289] "Started kubelet" Oct 31 05:27:51.831980 kubelet[2992]: I1031 05:27:51.830654 2992 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 05:27:51.831980 kubelet[2992]: I1031 
05:27:51.831660 2992 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 05:27:51.833764 kubelet[2992]: I1031 05:27:51.833751 2992 server.go:317] "Adding debug handlers to kubelet server" Oct 31 05:27:51.835780 kubelet[2992]: I1031 05:27:51.835751 2992 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 05:27:51.835877 kubelet[2992]: I1031 05:27:51.835866 2992 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 05:27:51.836980 kubelet[2992]: I1031 05:27:51.836941 2992 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 05:27:51.840782 kubelet[2992]: I1031 05:27:51.840769 2992 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 05:27:51.841031 kubelet[2992]: I1031 05:27:51.841022 2992 reconciler.go:26] "Reconciler: start to sync state" Oct 31 05:27:51.841962 kubelet[2992]: I1031 05:27:51.841870 2992 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 31 05:27:51.843842 kubelet[2992]: E1031 05:27:51.843715 2992 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 05:27:51.844280 kubelet[2992]: I1031 05:27:51.844044 2992 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 31 05:27:51.844559 kubelet[2992]: I1031 05:27:51.844535 2992 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 05:27:51.846302 kubelet[2992]: I1031 05:27:51.846136 2992 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 31 05:27:51.846302 kubelet[2992]: I1031 05:27:51.846306 2992 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 31 05:27:51.846399 kubelet[2992]: I1031 05:27:51.846324 2992 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 31 05:27:51.846399 kubelet[2992]: I1031 05:27:51.846329 2992 kubelet.go:2436] "Starting kubelet main sync loop" Oct 31 05:27:51.846399 kubelet[2992]: E1031 05:27:51.846359 2992 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 05:27:51.849533 kubelet[2992]: I1031 05:27:51.849499 2992 factory.go:223] Registration of the containerd container factory successfully Oct 31 05:27:51.849644 kubelet[2992]: I1031 05:27:51.849638 2992 factory.go:223] Registration of the systemd container factory successfully Oct 31 05:27:51.891881 kubelet[2992]: I1031 05:27:51.891866 2992 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 05:27:51.891986 kubelet[2992]: I1031 05:27:51.891979 2992 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 05:27:51.892039 kubelet[2992]: I1031 05:27:51.892034 2992 state_mem.go:36] "Initialized new in-memory state store" Oct 31 05:27:51.892149 kubelet[2992]: I1031 05:27:51.892137 2992 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 31 05:27:51.892194 kubelet[2992]: I1031 05:27:51.892181 2992 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 31 05:27:51.892225 kubelet[2992]: I1031 05:27:51.892222 2992 policy_none.go:49] "None policy: Start" Oct 31 05:27:51.892256 kubelet[2992]: I1031 05:27:51.892253 2992 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 05:27:51.892284 kubelet[2992]: I1031 05:27:51.892280 2992 state_mem.go:35] "Initializing new in-memory state store" Oct 31 05:27:51.892367 kubelet[2992]: I1031 05:27:51.892361 2992 state_mem.go:75] "Updated machine memory state" Oct 31 05:27:51.894903 kubelet[2992]: E1031 05:27:51.894893 2992 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 31 05:27:51.895795 kubelet[2992]: I1031 05:27:51.895443 2992 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 05:27:51.895795 kubelet[2992]: I1031 05:27:51.895453 2992 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 05:27:51.895795 kubelet[2992]: I1031 05:27:51.895674 2992 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 05:27:51.899220 kubelet[2992]: E1031 05:27:51.899109 2992 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 31 05:27:51.949569 kubelet[2992]: I1031 05:27:51.949510 2992 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 05:27:51.950087 kubelet[2992]: I1031 05:27:51.949719 2992 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 05:27:51.950301 kubelet[2992]: I1031 05:27:51.949789 2992 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 05:27:52.002284 kubelet[2992]: I1031 05:27:52.002262 2992 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 05:27:52.008637 kubelet[2992]: I1031 05:27:52.008203 2992 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 31 05:27:52.008637 kubelet[2992]: I1031 05:27:52.008245 2992 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 05:27:52.142714 kubelet[2992]: I1031 05:27:52.142685 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 05:27:52.142888 kubelet[2992]: I1031 05:27:52.142862 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 05:27:52.142976 kubelet[2992]: I1031 05:27:52.142967 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 05:27:52.143048 kubelet[2992]: I1031 05:27:52.143036 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/458148ce844f17f8f66cb54d2824e808-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"458148ce844f17f8f66cb54d2824e808\") " pod="kube-system/kube-apiserver-localhost" Oct 31 05:27:52.143103 kubelet[2992]: I1031 05:27:52.143097 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 05:27:52.143144 kubelet[2992]: I1031 05:27:52.143137 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 05:27:52.143198 kubelet[2992]: I1031 05:27:52.143192 2992 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 31 05:27:52.143272 kubelet[2992]: I1031 05:27:52.143230 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/458148ce844f17f8f66cb54d2824e808-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"458148ce844f17f8f66cb54d2824e808\") " pod="kube-system/kube-apiserver-localhost" Oct 31 05:27:52.143335 kubelet[2992]: I1031 05:27:52.143325 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/458148ce844f17f8f66cb54d2824e808-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"458148ce844f17f8f66cb54d2824e808\") " pod="kube-system/kube-apiserver-localhost" Oct 31 05:27:52.821164 kubelet[2992]: I1031 05:27:52.821135 2992 apiserver.go:52] "Watching apiserver" Oct 31 05:27:52.842389 kubelet[2992]: I1031 05:27:52.842364 2992 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 05:27:52.875150 kubelet[2992]: I1031 05:27:52.875129 2992 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 05:27:52.875609 kubelet[2992]: I1031 05:27:52.875596 2992 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 05:27:52.879131 kubelet[2992]: E1031 05:27:52.878972 2992 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 31 05:27:52.879242 kubelet[2992]: E1031 05:27:52.879223 2992 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 31 05:27:52.894478 kubelet[2992]: I1031 05:27:52.894432 2992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8944086919999998 podStartE2EDuration="1.894408692s" podCreationTimestamp="2025-10-31 05:27:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 05:27:52.892845965 +0000 UTC m=+1.402716401" watchObservedRunningTime="2025-10-31 05:27:52.894408692 +0000 UTC m=+1.404279121" Oct 31 05:27:52.898205 kubelet[2992]: I1031 05:27:52.898079 2992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.898069124 podStartE2EDuration="1.898069124s" podCreationTimestamp="2025-10-31 05:27:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 05:27:52.897907424 +0000 UTC m=+1.407777860" watchObservedRunningTime="2025-10-31 05:27:52.898069124 +0000 UTC m=+1.407939552" Oct 31 05:27:52.908131 kubelet[2992]: I1031 05:27:52.908093 2992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.908080203 podStartE2EDuration="1.908080203s" podCreationTimestamp="2025-10-31 05:27:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 05:27:52.903316553 +0000 UTC m=+1.413186988" watchObservedRunningTime="2025-10-31 05:27:52.908080203 +0000 UTC m=+1.417950638" Oct 31 05:27:56.088502 kubelet[2992]: I1031 05:27:56.088477 2992 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 31 05:27:56.088758 containerd[1681]: time="2025-10-31T05:27:56.088671683Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 31 05:27:56.088872 kubelet[2992]: I1031 05:27:56.088755 2992 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 31 05:27:56.926792 systemd[1]: Created slice kubepods-besteffort-pod47580849_10c0_4b01_a811_e68906c6ae48.slice - libcontainer container kubepods-besteffort-pod47580849_10c0_4b01_a811_e68906c6ae48.slice. Oct 31 05:27:56.973479 kubelet[2992]: I1031 05:27:56.973458 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/47580849-10c0-4b01-a811-e68906c6ae48-kube-proxy\") pod \"kube-proxy-b9q2k\" (UID: \"47580849-10c0-4b01-a811-e68906c6ae48\") " pod="kube-system/kube-proxy-b9q2k" Oct 31 05:27:56.973708 kubelet[2992]: I1031 05:27:56.973627 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47580849-10c0-4b01-a811-e68906c6ae48-lib-modules\") pod \"kube-proxy-b9q2k\" (UID: \"47580849-10c0-4b01-a811-e68906c6ae48\") " pod="kube-system/kube-proxy-b9q2k" Oct 31 05:27:56.973708 kubelet[2992]: I1031 05:27:56.973649 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47580849-10c0-4b01-a811-e68906c6ae48-xtables-lock\") pod \"kube-proxy-b9q2k\" (UID: \"47580849-10c0-4b01-a811-e68906c6ae48\") " pod="kube-system/kube-proxy-b9q2k" Oct 31 05:27:56.973708 kubelet[2992]: I1031 05:27:56.973658 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckr7p\" (UniqueName: \"kubernetes.io/projected/47580849-10c0-4b01-a811-e68906c6ae48-kube-api-access-ckr7p\") pod \"kube-proxy-b9q2k\" (UID: \"47580849-10c0-4b01-a811-e68906c6ae48\") " pod="kube-system/kube-proxy-b9q2k" Oct 31 05:27:57.235440 containerd[1681]: time="2025-10-31T05:27:57.235363194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b9q2k,Uid:47580849-10c0-4b01-a811-e68906c6ae48,Namespace:kube-system,Attempt:0,}" Oct 31 05:27:57.298206 systemd[1]: Created slice kubepods-besteffort-pode7781b10_f13e_45e6_902b_a16d26d05f42.slice - libcontainer container kubepods-besteffort-pode7781b10_f13e_45e6_902b_a16d26d05f42.slice. Oct 31 05:27:57.315347 containerd[1681]: time="2025-10-31T05:27:57.315257468Z" level=info msg="connecting to shim ecfb795a3ea5d554f2ee45b0b1f9d59dc3fb475ba36c6e00731a0e7b6f2be08e" address="unix:///run/containerd/s/34a642b7ec448e48d0fad4fca5b4d37c83f2a41fbf4f2b285d6777ad6ea9928c" namespace=k8s.io protocol=ttrpc version=3 Oct 31 05:27:57.333248 systemd[1]: Started cri-containerd-ecfb795a3ea5d554f2ee45b0b1f9d59dc3fb475ba36c6e00731a0e7b6f2be08e.scope - libcontainer container ecfb795a3ea5d554f2ee45b0b1f9d59dc3fb475ba36c6e00731a0e7b6f2be08e. 
Oct 31 05:27:57.350451 containerd[1681]: time="2025-10-31T05:27:57.350426161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b9q2k,Uid:47580849-10c0-4b01-a811-e68906c6ae48,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecfb795a3ea5d554f2ee45b0b1f9d59dc3fb475ba36c6e00731a0e7b6f2be08e\"" Oct 31 05:27:57.360890 containerd[1681]: time="2025-10-31T05:27:57.360838051Z" level=info msg="CreateContainer within sandbox \"ecfb795a3ea5d554f2ee45b0b1f9d59dc3fb475ba36c6e00731a0e7b6f2be08e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 31 05:27:57.376378 kubelet[2992]: I1031 05:27:57.376353 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e7781b10-f13e-45e6-902b-a16d26d05f42-var-lib-calico\") pod \"tigera-operator-7dcd859c48-lsbrp\" (UID: \"e7781b10-f13e-45e6-902b-a16d26d05f42\") " pod="tigera-operator/tigera-operator-7dcd859c48-lsbrp" Oct 31 05:27:57.376378 kubelet[2992]: I1031 05:27:57.376379 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5frjc\" (UniqueName: \"kubernetes.io/projected/e7781b10-f13e-45e6-902b-a16d26d05f42-kube-api-access-5frjc\") pod \"tigera-operator-7dcd859c48-lsbrp\" (UID: \"e7781b10-f13e-45e6-902b-a16d26d05f42\") " pod="tigera-operator/tigera-operator-7dcd859c48-lsbrp" Oct 31 05:27:57.392296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1658990356.mount: Deactivated successfully. Oct 31 05:27:57.396290 containerd[1681]: time="2025-10-31T05:27:57.396264891Z" level=info msg="Container 8782c3cd082877ef5474876ed0cb6d5f0dbfe5fae106cac1b2e6b13a0f50507d: CDI devices from CRI Config.CDIDevices: []" Oct 31 05:27:57.581812 containerd[1681]: time="2025-10-31T05:27:57.581725864Z" level=info msg="CreateContainer within sandbox \"ecfb795a3ea5d554f2ee45b0b1f9d59dc3fb475ba36c6e00731a0e7b6f2be08e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8782c3cd082877ef5474876ed0cb6d5f0dbfe5fae106cac1b2e6b13a0f50507d\"" Oct 31 05:27:57.582626 containerd[1681]: time="2025-10-31T05:27:57.582604356Z" level=info msg="StartContainer for \"8782c3cd082877ef5474876ed0cb6d5f0dbfe5fae106cac1b2e6b13a0f50507d\"" Oct 31 05:27:57.583881 containerd[1681]: time="2025-10-31T05:27:57.583854741Z" level=info msg="connecting to shim 8782c3cd082877ef5474876ed0cb6d5f0dbfe5fae106cac1b2e6b13a0f50507d" address="unix:///run/containerd/s/34a642b7ec448e48d0fad4fca5b4d37c83f2a41fbf4f2b285d6777ad6ea9928c" protocol=ttrpc version=3 Oct 31 05:27:57.600302 containerd[1681]: time="2025-10-31T05:27:57.600278437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-lsbrp,Uid:e7781b10-f13e-45e6-902b-a16d26d05f42,Namespace:tigera-operator,Attempt:0,}" Oct 31 05:27:57.602288 systemd[1]: Started cri-containerd-8782c3cd082877ef5474876ed0cb6d5f0dbfe5fae106cac1b2e6b13a0f50507d.scope - libcontainer container 8782c3cd082877ef5474876ed0cb6d5f0dbfe5fae106cac1b2e6b13a0f50507d. 
Oct 31 05:27:57.632677 containerd[1681]: time="2025-10-31T05:27:57.632613702Z" level=info msg="StartContainer for \"8782c3cd082877ef5474876ed0cb6d5f0dbfe5fae106cac1b2e6b13a0f50507d\" returns successfully" Oct 31 05:27:57.639869 containerd[1681]: time="2025-10-31T05:27:57.639706035Z" level=info msg="connecting to shim e05f3fbe9d9059fc4409e20834447de69706e7221fc9c091c4b0a4383a819300" address="unix:///run/containerd/s/4ce09c11c8db564587c94c0c0bb81bba8a57eb9bf8039bea1b46d441ca5de035" namespace=k8s.io protocol=ttrpc version=3 Oct 31 05:27:57.657029 systemd[1]: Started cri-containerd-e05f3fbe9d9059fc4409e20834447de69706e7221fc9c091c4b0a4383a819300.scope - libcontainer container e05f3fbe9d9059fc4409e20834447de69706e7221fc9c091c4b0a4383a819300. Oct 31 05:27:57.691541 containerd[1681]: time="2025-10-31T05:27:57.691456737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-lsbrp,Uid:e7781b10-f13e-45e6-902b-a16d26d05f42,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e05f3fbe9d9059fc4409e20834447de69706e7221fc9c091c4b0a4383a819300\"" Oct 31 05:27:57.692925 containerd[1681]: time="2025-10-31T05:27:57.692897902Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 31 05:27:58.088014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2501354411.mount: Deactivated successfully. Oct 31 05:27:58.931702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3231314738.mount: Deactivated successfully. Oct 31 05:27:59.327328 containerd[1681]: time="2025-10-31T05:27:59.327174869Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:27:59.328046 containerd[1681]: time="2025-10-31T05:27:59.328034557Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 31 05:27:59.328300 containerd[1681]: time="2025-10-31T05:27:59.328288091Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:27:59.329616 containerd[1681]: time="2025-10-31T05:27:59.329604584Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:27:59.330434 containerd[1681]: time="2025-10-31T05:27:59.330421573Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.637506722s" Oct 31 05:27:59.330484 containerd[1681]: time="2025-10-31T05:27:59.330476613Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 31 05:27:59.333056 containerd[1681]: time="2025-10-31T05:27:59.333030432Z" level=info msg="CreateContainer within sandbox \"e05f3fbe9d9059fc4409e20834447de69706e7221fc9c091c4b0a4383a819300\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 31 05:27:59.337305 containerd[1681]: time="2025-10-31T05:27:59.337276735Z" level=info msg="Container 07447fcae7555517180b0debc03d0fdbdaa51fdebe04052c16f684ce2b3c6723: CDI devices from CRI 
Config.CDIDevices: []" Oct 31 05:27:59.339382 containerd[1681]: time="2025-10-31T05:27:59.339355756Z" level=info msg="CreateContainer within sandbox \"e05f3fbe9d9059fc4409e20834447de69706e7221fc9c091c4b0a4383a819300\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"07447fcae7555517180b0debc03d0fdbdaa51fdebe04052c16f684ce2b3c6723\"" Oct 31 05:27:59.340111 containerd[1681]: time="2025-10-31T05:27:59.340096063Z" level=info msg="StartContainer for \"07447fcae7555517180b0debc03d0fdbdaa51fdebe04052c16f684ce2b3c6723\"" Oct 31 05:27:59.340672 containerd[1681]: time="2025-10-31T05:27:59.340658527Z" level=info msg="connecting to shim 07447fcae7555517180b0debc03d0fdbdaa51fdebe04052c16f684ce2b3c6723" address="unix:///run/containerd/s/4ce09c11c8db564587c94c0c0bb81bba8a57eb9bf8039bea1b46d441ca5de035" protocol=ttrpc version=3 Oct 31 05:27:59.363100 systemd[1]: Started cri-containerd-07447fcae7555517180b0debc03d0fdbdaa51fdebe04052c16f684ce2b3c6723.scope - libcontainer container 07447fcae7555517180b0debc03d0fdbdaa51fdebe04052c16f684ce2b3c6723. Oct 31 05:27:59.384097 containerd[1681]: time="2025-10-31T05:27:59.384071835Z" level=info msg="StartContainer for \"07447fcae7555517180b0debc03d0fdbdaa51fdebe04052c16f684ce2b3c6723\" returns successfully" Oct 31 05:27:59.898847 kubelet[2992]: I1031 05:27:59.898813 2992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b9q2k" podStartSLOduration=3.89880117 podStartE2EDuration="3.89880117s" podCreationTimestamp="2025-10-31 05:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 05:27:57.895680484 +0000 UTC m=+6.405550919" watchObservedRunningTime="2025-10-31 05:27:59.89880117 +0000 UTC m=+8.408671599" Oct 31 05:27:59.899250 kubelet[2992]: I1031 05:27:59.898865 2992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-lsbrp" podStartSLOduration=1.260584399 podStartE2EDuration="2.898862611s" podCreationTimestamp="2025-10-31 05:27:57 +0000 UTC" firstStartedPulling="2025-10-31 05:27:57.692620145 +0000 UTC m=+6.202490568" lastFinishedPulling="2025-10-31 05:27:59.330898353 +0000 UTC m=+7.840768780" observedRunningTime="2025-10-31 05:27:59.898207725 +0000 UTC m=+8.408078161" watchObservedRunningTime="2025-10-31 05:27:59.898862611 +0000 UTC m=+8.408733046" Oct 31 05:28:05.407479 sudo[2001]: pam_unix(sudo:session): session closed for user root Oct 31 05:28:05.410396 sshd-session[1997]: pam_unix(sshd:session): session closed for user core Oct 31 05:28:05.411278 sshd[2000]: Connection closed by 147.75.109.163 port 39288 Oct 31 05:28:05.413509 systemd-logind[1655]: Session 9 logged out. Waiting for processes to exit. Oct 31 05:28:05.414123 systemd[1]: sshd@6-139.178.70.106:22-147.75.109.163:39288.service: Deactivated successfully. Oct 31 05:28:05.417244 systemd[1]: session-9.scope: Deactivated successfully. Oct 31 05:28:05.417383 systemd[1]: session-9.scope: Consumed 3.471s CPU time, 152.6M memory peak. Oct 31 05:28:05.419664 systemd-logind[1655]: Removed session 9. Oct 31 05:28:09.481172 systemd[1]: Created slice kubepods-besteffort-pod8a4fe9e5_ca60_4620_9643_431fb842b077.slice - libcontainer container kubepods-besteffort-pod8a4fe9e5_ca60_4620_9643_431fb842b077.slice. 
Oct 31 05:28:09.577500 kubelet[2992]: I1031 05:28:09.577170 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a4fe9e5-ca60-4620-9643-431fb842b077-tigera-ca-bundle\") pod \"calico-typha-56764fd7d-6b59s\" (UID: \"8a4fe9e5-ca60-4620-9643-431fb842b077\") " pod="calico-system/calico-typha-56764fd7d-6b59s" Oct 31 05:28:09.577500 kubelet[2992]: I1031 05:28:09.577237 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8a4fe9e5-ca60-4620-9643-431fb842b077-typha-certs\") pod \"calico-typha-56764fd7d-6b59s\" (UID: \"8a4fe9e5-ca60-4620-9643-431fb842b077\") " pod="calico-system/calico-typha-56764fd7d-6b59s" Oct 31 05:28:09.577500 kubelet[2992]: I1031 05:28:09.577254 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj6sn\" (UniqueName: \"kubernetes.io/projected/8a4fe9e5-ca60-4620-9643-431fb842b077-kube-api-access-kj6sn\") pod \"calico-typha-56764fd7d-6b59s\" (UID: \"8a4fe9e5-ca60-4620-9643-431fb842b077\") " pod="calico-system/calico-typha-56764fd7d-6b59s" Oct 31 05:28:09.637628 systemd[1]: Created slice kubepods-besteffort-pod8fc99dce_5c1e_43f1_b3f0_57a26fa7393e.slice - libcontainer container kubepods-besteffort-pod8fc99dce_5c1e_43f1_b3f0_57a26fa7393e.slice. Oct 31 05:28:09.778638 kubelet[2992]: I1031 05:28:09.778083 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8fc99dce-5c1e-43f1-b3f0-57a26fa7393e-policysync\") pod \"calico-node-hfv9f\" (UID: \"8fc99dce-5c1e-43f1-b3f0-57a26fa7393e\") " pod="calico-system/calico-node-hfv9f" Oct 31 05:28:09.778638 kubelet[2992]: I1031 05:28:09.778512 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fc99dce-5c1e-43f1-b3f0-57a26fa7393e-tigera-ca-bundle\") pod \"calico-node-hfv9f\" (UID: \"8fc99dce-5c1e-43f1-b3f0-57a26fa7393e\") " pod="calico-system/calico-node-hfv9f" Oct 31 05:28:09.778638 kubelet[2992]: I1031 05:28:09.778533 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8fc99dce-5c1e-43f1-b3f0-57a26fa7393e-var-run-calico\") pod \"calico-node-hfv9f\" (UID: \"8fc99dce-5c1e-43f1-b3f0-57a26fa7393e\") " pod="calico-system/calico-node-hfv9f" Oct 31 05:28:09.779693 kubelet[2992]: I1031 05:28:09.779283 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8fc99dce-5c1e-43f1-b3f0-57a26fa7393e-xtables-lock\") pod \"calico-node-hfv9f\" (UID: \"8fc99dce-5c1e-43f1-b3f0-57a26fa7393e\") " pod="calico-system/calico-node-hfv9f" Oct 31 05:28:09.779693 kubelet[2992]: I1031 05:28:09.779313 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8fc99dce-5c1e-43f1-b3f0-57a26fa7393e-var-lib-calico\") pod \"calico-node-hfv9f\" (UID: \"8fc99dce-5c1e-43f1-b3f0-57a26fa7393e\") " pod="calico-system/calico-node-hfv9f" Oct 31 05:28:09.779693 kubelet[2992]: I1031 05:28:09.779342 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fvhbr\" (UniqueName: \"kubernetes.io/projected/8fc99dce-5c1e-43f1-b3f0-57a26fa7393e-kube-api-access-fvhbr\") pod \"calico-node-hfv9f\" (UID: \"8fc99dce-5c1e-43f1-b3f0-57a26fa7393e\") " pod="calico-system/calico-node-hfv9f" Oct 31 05:28:09.779693 kubelet[2992]: I1031 05:28:09.779364 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8fc99dce-5c1e-43f1-b3f0-57a26fa7393e-cni-net-dir\") pod \"calico-node-hfv9f\" (UID: \"8fc99dce-5c1e-43f1-b3f0-57a26fa7393e\") " pod="calico-system/calico-node-hfv9f" Oct 31 05:28:09.779693 kubelet[2992]: I1031 05:28:09.779379 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fc99dce-5c1e-43f1-b3f0-57a26fa7393e-lib-modules\") pod \"calico-node-hfv9f\" (UID: \"8fc99dce-5c1e-43f1-b3f0-57a26fa7393e\") " pod="calico-system/calico-node-hfv9f" Oct 31 05:28:09.780047 kubelet[2992]: I1031 05:28:09.779393 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8fc99dce-5c1e-43f1-b3f0-57a26fa7393e-cni-log-dir\") pod \"calico-node-hfv9f\" (UID: \"8fc99dce-5c1e-43f1-b3f0-57a26fa7393e\") " pod="calico-system/calico-node-hfv9f" Oct 31 05:28:09.780047 kubelet[2992]: I1031 05:28:09.779504 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8fc99dce-5c1e-43f1-b3f0-57a26fa7393e-flexvol-driver-host\") pod \"calico-node-hfv9f\" (UID: \"8fc99dce-5c1e-43f1-b3f0-57a26fa7393e\") " pod="calico-system/calico-node-hfv9f" Oct 31 05:28:09.780047 kubelet[2992]: I1031 05:28:09.779543 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8fc99dce-5c1e-43f1-b3f0-57a26fa7393e-cni-bin-dir\") pod \"calico-node-hfv9f\" (UID: \"8fc99dce-5c1e-43f1-b3f0-57a26fa7393e\") " pod="calico-system/calico-node-hfv9f" Oct 31 05:28:09.780047 kubelet[2992]: I1031 05:28:09.779555 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8fc99dce-5c1e-43f1-b3f0-57a26fa7393e-node-certs\") pod \"calico-node-hfv9f\" (UID: \"8fc99dce-5c1e-43f1-b3f0-57a26fa7393e\") " pod="calico-system/calico-node-hfv9f" Oct 31 05:28:09.786234 containerd[1681]: time="2025-10-31T05:28:09.786160394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56764fd7d-6b59s,Uid:8a4fe9e5-ca60-4620-9643-431fb842b077,Namespace:calico-system,Attempt:0,}" Oct 31 05:28:09.803032 containerd[1681]: time="2025-10-31T05:28:09.802779911Z" level=info msg="connecting to shim b79983b76a917ffbca0014dd21776af3d378af5ca4284760405d412ebb419c41" address="unix:///run/containerd/s/c27aaebca1de439744234456444f6d6b4267d92d1bc502201e4a80dbd28aae28" namespace=k8s.io protocol=ttrpc version=3 Oct 31 05:28:09.810934 kubelet[2992]: E1031 05:28:09.810092 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d5mr7" podUID="dfabc2cb-7622-45ac-b566-baf9171817d8" Oct 31 05:28:09.835160 systemd[1]: Started 
cri-containerd-b79983b76a917ffbca0014dd21776af3d378af5ca4284760405d412ebb419c41.scope - libcontainer container b79983b76a917ffbca0014dd21776af3d378af5ca4284760405d412ebb419c41. Oct 31 05:28:09.887170 containerd[1681]: time="2025-10-31T05:28:09.887141465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56764fd7d-6b59s,Uid:8a4fe9e5-ca60-4620-9643-431fb842b077,Namespace:calico-system,Attempt:0,} returns sandbox id \"b79983b76a917ffbca0014dd21776af3d378af5ca4284760405d412ebb419c41\"" Oct 31 05:28:09.888356 containerd[1681]: time="2025-10-31T05:28:09.888152642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 31 05:28:09.902365 kubelet[2992]: E1031 05:28:09.902248 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:09.902365 kubelet[2992]: W1031 05:28:09.902264 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:09.914310 kubelet[2992]: E1031 05:28:09.913577 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:09.944130 containerd[1681]: time="2025-10-31T05:28:09.944080029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hfv9f,Uid:8fc99dce-5c1e-43f1-b3f0-57a26fa7393e,Namespace:calico-system,Attempt:0,}" Oct 31 05:28:09.981868 kubelet[2992]: E1031 05:28:09.981824 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:09.981868 kubelet[2992]: W1031 05:28:09.981850 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:09.981868 kubelet[2992]: E1031 05:28:09.981864 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:09.982089 kubelet[2992]: I1031 05:28:09.981904 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dfabc2cb-7622-45ac-b566-baf9171817d8-socket-dir\") pod \"csi-node-driver-d5mr7\" (UID: \"dfabc2cb-7622-45ac-b566-baf9171817d8\") " pod="calico-system/csi-node-driver-d5mr7" Oct 31 05:28:09.982158 kubelet[2992]: E1031 05:28:09.982145 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:09.982158 kubelet[2992]: W1031 05:28:09.982157 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:09.982203 kubelet[2992]: E1031 05:28:09.982165 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 05:28:09.982478 kubelet[2992]: E1031 05:28:09.982461 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:09.982517 kubelet[2992]: W1031 05:28:09.982480 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:09.982517 kubelet[2992]: E1031 05:28:09.982488 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:09.982637 kubelet[2992]: E1031 05:28:09.982624 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:09.982637 kubelet[2992]: W1031 05:28:09.982633 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:09.982691 kubelet[2992]: E1031 05:28:09.982642 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:09.982691 kubelet[2992]: I1031 05:28:09.982681 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dfabc2cb-7622-45ac-b566-baf9171817d8-registration-dir\") pod \"csi-node-driver-d5mr7\" (UID: \"dfabc2cb-7622-45ac-b566-baf9171817d8\") " pod="calico-system/csi-node-driver-d5mr7" Oct 31 05:28:09.982822 kubelet[2992]: E1031 05:28:09.982809 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:09.982822 kubelet[2992]: W1031 05:28:09.982821 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:09.982869 kubelet[2992]: E1031 05:28:09.982829 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:09.982869 kubelet[2992]: I1031 05:28:09.982863 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dfabc2cb-7622-45ac-b566-baf9171817d8-kubelet-dir\") pod \"csi-node-driver-d5mr7\" (UID: \"dfabc2cb-7622-45ac-b566-baf9171817d8\") " pod="calico-system/csi-node-driver-d5mr7" Oct 31 05:28:09.989312 kubelet[2992]: E1031 05:28:09.982992 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:09.989312 kubelet[2992]: W1031 05:28:09.983000 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:09.989312 kubelet[2992]: E1031 05:28:09.983006 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 05:28:09.989312 kubelet[2992]: I1031 05:28:09.983026 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dfabc2cb-7622-45ac-b566-baf9171817d8-varrun\") pod \"csi-node-driver-d5mr7\" (UID: \"dfabc2cb-7622-45ac-b566-baf9171817d8\") " pod="calico-system/csi-node-driver-d5mr7" Oct 31 05:28:09.989312 kubelet[2992]: E1031 05:28:09.983184 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:09.989312 kubelet[2992]: W1031 05:28:09.983191 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:09.989312 kubelet[2992]: E1031 05:28:09.983205 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:09.989312 kubelet[2992]: I1031 05:28:09.983223 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k4sl\" (UniqueName: \"kubernetes.io/projected/dfabc2cb-7622-45ac-b566-baf9171817d8-kube-api-access-6k4sl\") pod \"csi-node-driver-d5mr7\" (UID: \"dfabc2cb-7622-45ac-b566-baf9171817d8\") " pod="calico-system/csi-node-driver-d5mr7" Oct 31 05:28:09.989312 kubelet[2992]: E1031 05:28:09.983343 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:09.989472 kubelet[2992]: W1031 05:28:09.983351 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:09.989472 kubelet[2992]: E1031 05:28:09.983359 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:09.989472 kubelet[2992]: E1031 05:28:09.983465 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:09.989472 kubelet[2992]: W1031 05:28:09.983472 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:09.989472 kubelet[2992]: E1031 05:28:09.983478 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:09.989472 kubelet[2992]: E1031 05:28:09.983596 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:09.989472 kubelet[2992]: W1031 05:28:09.983601 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:09.989472 kubelet[2992]: E1031 05:28:09.983606 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 05:28:09.989472 kubelet[2992]: E1031 05:28:09.983717 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:09.989472 kubelet[2992]: W1031 05:28:09.983722 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:09.989660 kubelet[2992]: E1031 05:28:09.983728 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:09.989660 kubelet[2992]: E1031 05:28:09.983897 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:09.989660 kubelet[2992]: W1031 05:28:09.983903 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:09.989660 kubelet[2992]: E1031 05:28:09.983910 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:09.989660 kubelet[2992]: E1031 05:28:09.984032 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:09.989660 kubelet[2992]: W1031 05:28:09.984039 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:09.989660 kubelet[2992]: E1031 05:28:09.984047 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:09.989660 kubelet[2992]: E1031 05:28:09.984162 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:09.989660 kubelet[2992]: W1031 05:28:09.984168 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:09.989660 kubelet[2992]: E1031 05:28:09.984175 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:09.993754 kubelet[2992]: E1031 05:28:09.984269 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:09.993754 kubelet[2992]: W1031 05:28:09.984274 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:09.993754 kubelet[2992]: E1031 05:28:09.984279 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 05:28:10.011790 containerd[1681]: time="2025-10-31T05:28:10.011737180Z" level=info msg="connecting to shim d3657c3af831de2c10299f719985c73d131289202b3f81429519cda6138dbc5b" address="unix:///run/containerd/s/6cd99c696e166c29e6813eb72acf82c5fd7ac21c60e1dcd76428fcd86a0c56a4" namespace=k8s.io protocol=ttrpc version=3 Oct 31 05:28:10.042070 systemd[1]: Started cri-containerd-d3657c3af831de2c10299f719985c73d131289202b3f81429519cda6138dbc5b.scope - libcontainer container d3657c3af831de2c10299f719985c73d131289202b3f81429519cda6138dbc5b. Oct 31 05:28:10.064905 containerd[1681]: time="2025-10-31T05:28:10.064842630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hfv9f,Uid:8fc99dce-5c1e-43f1-b3f0-57a26fa7393e,Namespace:calico-system,Attempt:0,} returns sandbox id \"d3657c3af831de2c10299f719985c73d131289202b3f81429519cda6138dbc5b\"" Oct 31 05:28:10.084481 kubelet[2992]: E1031 05:28:10.084409 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.084481 kubelet[2992]: W1031 05:28:10.084425 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.084481 kubelet[2992]: E1031 05:28:10.084437 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:10.084736 kubelet[2992]: E1031 05:28:10.084701 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.084736 kubelet[2992]: W1031 05:28:10.084708 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.084736 kubelet[2992]: E1031 05:28:10.084714 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:10.084838 kubelet[2992]: E1031 05:28:10.084830 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.084867 kubelet[2992]: W1031 05:28:10.084838 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.084867 kubelet[2992]: E1031 05:28:10.084846 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 05:28:10.084974 kubelet[2992]: E1031 05:28:10.084946 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.084974 kubelet[2992]: W1031 05:28:10.084953 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.084974 kubelet[2992]: E1031 05:28:10.084959 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:10.085054 kubelet[2992]: E1031 05:28:10.085049 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.085054 kubelet[2992]: W1031 05:28:10.085053 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.085159 kubelet[2992]: E1031 05:28:10.085058 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:10.085212 kubelet[2992]: E1031 05:28:10.085205 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.085249 kubelet[2992]: W1031 05:28:10.085243 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.085292 kubelet[2992]: E1031 05:28:10.085287 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:10.085421 kubelet[2992]: E1031 05:28:10.085408 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.085421 kubelet[2992]: W1031 05:28:10.085416 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.085421 kubelet[2992]: E1031 05:28:10.085422 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:10.085500 kubelet[2992]: E1031 05:28:10.085496 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.085562 kubelet[2992]: W1031 05:28:10.085500 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.085562 kubelet[2992]: E1031 05:28:10.085505 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 05:28:10.085622 kubelet[2992]: E1031 05:28:10.085574 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.085622 kubelet[2992]: W1031 05:28:10.085579 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.085622 kubelet[2992]: E1031 05:28:10.085584 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:10.085707 kubelet[2992]: E1031 05:28:10.085697 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.085707 kubelet[2992]: W1031 05:28:10.085705 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.085753 kubelet[2992]: E1031 05:28:10.085710 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:10.085814 kubelet[2992]: E1031 05:28:10.085802 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.085814 kubelet[2992]: W1031 05:28:10.085810 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.085872 kubelet[2992]: E1031 05:28:10.085815 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:10.085906 kubelet[2992]: E1031 05:28:10.085895 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.085906 kubelet[2992]: W1031 05:28:10.085904 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.086032 kubelet[2992]: E1031 05:28:10.085909 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:10.086081 kubelet[2992]: E1031 05:28:10.086075 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.086130 kubelet[2992]: W1031 05:28:10.086124 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.086163 kubelet[2992]: E1031 05:28:10.086157 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 05:28:10.086355 kubelet[2992]: E1031 05:28:10.086294 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.086355 kubelet[2992]: W1031 05:28:10.086300 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.086355 kubelet[2992]: E1031 05:28:10.086306 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:10.086452 kubelet[2992]: E1031 05:28:10.086446 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.086487 kubelet[2992]: W1031 05:28:10.086482 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.086603 kubelet[2992]: E1031 05:28:10.086510 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:10.086659 kubelet[2992]: E1031 05:28:10.086653 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.086697 kubelet[2992]: W1031 05:28:10.086691 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.086783 kubelet[2992]: E1031 05:28:10.086730 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:10.086849 kubelet[2992]: E1031 05:28:10.086843 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.092138 kubelet[2992]: W1031 05:28:10.086939 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.092138 kubelet[2992]: E1031 05:28:10.086947 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:10.092138 kubelet[2992]: E1031 05:28:10.087039 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.092138 kubelet[2992]: W1031 05:28:10.087044 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.092138 kubelet[2992]: E1031 05:28:10.087049 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 05:28:10.092138 kubelet[2992]: E1031 05:28:10.087146 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.092138 kubelet[2992]: W1031 05:28:10.087152 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.092138 kubelet[2992]: E1031 05:28:10.087156 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:10.092138 kubelet[2992]: E1031 05:28:10.087240 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.092138 kubelet[2992]: W1031 05:28:10.087245 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.092325 kubelet[2992]: E1031 05:28:10.087250 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:10.092325 kubelet[2992]: E1031 05:28:10.087349 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.092325 kubelet[2992]: W1031 05:28:10.087355 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.092325 kubelet[2992]: E1031 05:28:10.087360 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:10.092325 kubelet[2992]: E1031 05:28:10.087511 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.092325 kubelet[2992]: W1031 05:28:10.087516 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.092325 kubelet[2992]: E1031 05:28:10.087528 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:10.092325 kubelet[2992]: E1031 05:28:10.087955 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.092325 kubelet[2992]: W1031 05:28:10.087961 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.092325 kubelet[2992]: E1031 05:28:10.087967 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 05:28:10.092494 kubelet[2992]: E1031 05:28:10.088116 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.092494 kubelet[2992]: W1031 05:28:10.088121 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.092494 kubelet[2992]: E1031 05:28:10.088127 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:10.092494 kubelet[2992]: E1031 05:28:10.089168 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.092494 kubelet[2992]: W1031 05:28:10.089176 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.092494 kubelet[2992]: E1031 05:28:10.089183 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:10.092842 kubelet[2992]: E1031 05:28:10.092829 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:10.092842 kubelet[2992]: W1031 05:28:10.092840 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:10.092894 kubelet[2992]: E1031 05:28:10.092850 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:10.847722 kubelet[2992]: E1031 05:28:10.847678 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d5mr7" podUID="dfabc2cb-7622-45ac-b566-baf9171817d8" Oct 31 05:28:11.248801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4038757609.mount: Deactivated successfully. 
Oct 31 05:28:11.991707 containerd[1681]: time="2025-10-31T05:28:11.991344574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:28:11.995805 containerd[1681]: time="2025-10-31T05:28:11.995786930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 31 05:28:11.998694 containerd[1681]: time="2025-10-31T05:28:11.998676752Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:28:12.003578 containerd[1681]: time="2025-10-31T05:28:12.003537659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:28:12.004053 containerd[1681]: time="2025-10-31T05:28:12.003884176Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.11571475s" Oct 31 05:28:12.004053 containerd[1681]: time="2025-10-31T05:28:12.003904829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 31 05:28:12.005044 containerd[1681]: time="2025-10-31T05:28:12.005026408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 31 05:28:12.073193 containerd[1681]: time="2025-10-31T05:28:12.073145577Z" level=info msg="CreateContainer within sandbox \"b79983b76a917ffbca0014dd21776af3d378af5ca4284760405d412ebb419c41\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 31 05:28:12.087540 containerd[1681]: time="2025-10-31T05:28:12.087033859Z" level=info msg="Container a878c90936eb61f62f164a25e573344b43e656de018dfa7105ed2846144ca0e4: CDI devices from CRI Config.CDIDevices: []" Oct 31 05:28:12.091120 containerd[1681]: time="2025-10-31T05:28:12.091104254Z" level=info msg="CreateContainer within sandbox \"b79983b76a917ffbca0014dd21776af3d378af5ca4284760405d412ebb419c41\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a878c90936eb61f62f164a25e573344b43e656de018dfa7105ed2846144ca0e4\"" Oct 31 05:28:12.091609 containerd[1681]: time="2025-10-31T05:28:12.091595119Z" level=info msg="StartContainer for \"a878c90936eb61f62f164a25e573344b43e656de018dfa7105ed2846144ca0e4\"" Oct 31 05:28:12.092593 containerd[1681]: time="2025-10-31T05:28:12.092575821Z" level=info msg="connecting to shim a878c90936eb61f62f164a25e573344b43e656de018dfa7105ed2846144ca0e4" address="unix:///run/containerd/s/c27aaebca1de439744234456444f6d6b4267d92d1bc502201e4a80dbd28aae28" protocol=ttrpc version=3 Oct 31 05:28:12.147052 systemd[1]: Started cri-containerd-a878c90936eb61f62f164a25e573344b43e656de018dfa7105ed2846144ca0e4.scope - libcontainer container a878c90936eb61f62f164a25e573344b43e656de018dfa7105ed2846144ca0e4. 
Oct 31 05:28:12.188623 containerd[1681]: time="2025-10-31T05:28:12.188569724Z" level=info msg="StartContainer for \"a878c90936eb61f62f164a25e573344b43e656de018dfa7105ed2846144ca0e4\" returns successfully" Oct 31 05:28:12.852808 kubelet[2992]: E1031 05:28:12.852745 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d5mr7" podUID="dfabc2cb-7622-45ac-b566-baf9171817d8" Oct 31 05:28:12.954604 kubelet[2992]: I1031 05:28:12.952193 2992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-56764fd7d-6b59s" podStartSLOduration=1.835523894 podStartE2EDuration="3.952182668s" podCreationTimestamp="2025-10-31 05:28:09 +0000 UTC" firstStartedPulling="2025-10-31 05:28:09.887887476 +0000 UTC m=+18.397757902" lastFinishedPulling="2025-10-31 05:28:12.00454625 +0000 UTC m=+20.514416676" observedRunningTime="2025-10-31 05:28:12.951097173 +0000 UTC m=+21.460967618" watchObservedRunningTime="2025-10-31 05:28:12.952182668 +0000 UTC m=+21.462053098" Oct 31 05:28:13.021719 kubelet[2992]: E1031 05:28:13.021697 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.021719 kubelet[2992]: W1031 05:28:13.021713 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.024270 kubelet[2992]: E1031 05:28:13.024251 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.024674 kubelet[2992]: E1031 05:28:13.024661 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.024674 kubelet[2992]: W1031 05:28:13.024670 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.024782 kubelet[2992]: E1031 05:28:13.024680 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.024806 kubelet[2992]: E1031 05:28:13.024786 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.024806 kubelet[2992]: W1031 05:28:13.024791 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.024806 kubelet[2992]: E1031 05:28:13.024796 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 05:28:13.024974 kubelet[2992]: E1031 05:28:13.024964 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.024974 kubelet[2992]: W1031 05:28:13.024970 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.025041 kubelet[2992]: E1031 05:28:13.024976 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.025154 kubelet[2992]: E1031 05:28:13.025144 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.025154 kubelet[2992]: W1031 05:28:13.025152 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.025201 kubelet[2992]: E1031 05:28:13.025157 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.025392 kubelet[2992]: E1031 05:28:13.025373 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.025392 kubelet[2992]: W1031 05:28:13.025380 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.025392 kubelet[2992]: E1031 05:28:13.025386 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.025569 kubelet[2992]: E1031 05:28:13.025560 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.025569 kubelet[2992]: W1031 05:28:13.025568 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.025569 kubelet[2992]: E1031 05:28:13.025573 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.025772 kubelet[2992]: E1031 05:28:13.025761 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.025772 kubelet[2992]: W1031 05:28:13.025768 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.025813 kubelet[2992]: E1031 05:28:13.025775 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 05:28:13.025942 kubelet[2992]: E1031 05:28:13.025877 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.025942 kubelet[2992]: W1031 05:28:13.025881 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.025942 kubelet[2992]: E1031 05:28:13.025886 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.026195 kubelet[2992]: E1031 05:28:13.025981 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.026195 kubelet[2992]: W1031 05:28:13.025986 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.026195 kubelet[2992]: E1031 05:28:13.025990 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.026195 kubelet[2992]: E1031 05:28:13.026060 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.026195 kubelet[2992]: W1031 05:28:13.026064 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.026195 kubelet[2992]: E1031 05:28:13.026069 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.026195 kubelet[2992]: E1031 05:28:13.026140 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.026195 kubelet[2992]: W1031 05:28:13.026144 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.026195 kubelet[2992]: E1031 05:28:13.026148 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.033391 kubelet[2992]: E1031 05:28:13.026232 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.033391 kubelet[2992]: W1031 05:28:13.026236 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.033391 kubelet[2992]: E1031 05:28:13.026241 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 05:28:13.033391 kubelet[2992]: E1031 05:28:13.026320 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.033391 kubelet[2992]: W1031 05:28:13.026324 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.033391 kubelet[2992]: E1031 05:28:13.026329 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.033391 kubelet[2992]: E1031 05:28:13.026399 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.033391 kubelet[2992]: W1031 05:28:13.026403 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.033391 kubelet[2992]: E1031 05:28:13.026407 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.033391 kubelet[2992]: E1031 05:28:13.026553 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.033556 kubelet[2992]: W1031 05:28:13.026557 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.033556 kubelet[2992]: E1031 05:28:13.026562 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.033556 kubelet[2992]: E1031 05:28:13.026659 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.033556 kubelet[2992]: W1031 05:28:13.026663 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.033556 kubelet[2992]: E1031 05:28:13.026668 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.033556 kubelet[2992]: E1031 05:28:13.026745 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.033556 kubelet[2992]: W1031 05:28:13.026749 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.033556 kubelet[2992]: E1031 05:28:13.026754 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 05:28:13.033556 kubelet[2992]: E1031 05:28:13.026835 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.033556 kubelet[2992]: W1031 05:28:13.026839 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.038344 kubelet[2992]: E1031 05:28:13.026844 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.038344 kubelet[2992]: E1031 05:28:13.026945 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.038344 kubelet[2992]: W1031 05:28:13.026949 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.038344 kubelet[2992]: E1031 05:28:13.026953 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.038344 kubelet[2992]: E1031 05:28:13.027065 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.038344 kubelet[2992]: W1031 05:28:13.027070 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.038344 kubelet[2992]: E1031 05:28:13.027075 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.038344 kubelet[2992]: E1031 05:28:13.027180 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.038344 kubelet[2992]: W1031 05:28:13.027186 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.038344 kubelet[2992]: E1031 05:28:13.027193 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.038520 kubelet[2992]: E1031 05:28:13.027390 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.038520 kubelet[2992]: W1031 05:28:13.027397 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.038520 kubelet[2992]: E1031 05:28:13.027402 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 05:28:13.038520 kubelet[2992]: E1031 05:28:13.027489 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.038520 kubelet[2992]: W1031 05:28:13.027493 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.038520 kubelet[2992]: E1031 05:28:13.027498 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.038520 kubelet[2992]: E1031 05:28:13.027573 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.038520 kubelet[2992]: W1031 05:28:13.027577 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.038520 kubelet[2992]: E1031 05:28:13.027582 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.038520 kubelet[2992]: E1031 05:28:13.027661 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.038675 kubelet[2992]: W1031 05:28:13.027666 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.038675 kubelet[2992]: E1031 05:28:13.027670 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.038675 kubelet[2992]: E1031 05:28:13.027937 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.038675 kubelet[2992]: W1031 05:28:13.027943 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.038675 kubelet[2992]: E1031 05:28:13.027948 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.038675 kubelet[2992]: E1031 05:28:13.028018 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.038675 kubelet[2992]: W1031 05:28:13.028024 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.038675 kubelet[2992]: E1031 05:28:13.028029 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 05:28:13.038675 kubelet[2992]: E1031 05:28:13.028094 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.038675 kubelet[2992]: W1031 05:28:13.028098 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.038826 kubelet[2992]: E1031 05:28:13.028102 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.038826 kubelet[2992]: E1031 05:28:13.028184 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.038826 kubelet[2992]: W1031 05:28:13.028188 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.038826 kubelet[2992]: E1031 05:28:13.028193 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.038826 kubelet[2992]: E1031 05:28:13.028335 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.038826 kubelet[2992]: W1031 05:28:13.028344 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.038826 kubelet[2992]: E1031 05:28:13.028349 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.038826 kubelet[2992]: E1031 05:28:13.028427 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.038826 kubelet[2992]: W1031 05:28:13.028432 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.038826 kubelet[2992]: E1031 05:28:13.028436 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 05:28:13.042850 kubelet[2992]: E1031 05:28:13.028516 2992 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 05:28:13.042850 kubelet[2992]: W1031 05:28:13.028520 2992 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 05:28:13.042850 kubelet[2992]: E1031 05:28:13.028525 2992 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 05:28:13.332470 containerd[1681]: time="2025-10-31T05:28:13.330028766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:28:13.332470 containerd[1681]: time="2025-10-31T05:28:13.330481559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 31 05:28:13.332470 containerd[1681]: time="2025-10-31T05:28:13.330767902Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:28:13.332726 containerd[1681]: time="2025-10-31T05:28:13.332581530Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:28:13.332783 containerd[1681]: time="2025-10-31T05:28:13.332769611Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.327664581s" Oct 31 05:28:13.332988 containerd[1681]: time="2025-10-31T05:28:13.332848771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 31 05:28:13.335684 containerd[1681]: time="2025-10-31T05:28:13.335659535Z" level=info msg="CreateContainer within sandbox \"d3657c3af831de2c10299f719985c73d131289202b3f81429519cda6138dbc5b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 31 05:28:13.341246 containerd[1681]: time="2025-10-31T05:28:13.340582021Z" level=info msg="Container dc44461af4a90dafb953da181f67c55248c07af12128bdcfdc21e4c759152ac6: CDI devices from CRI Config.CDIDevices: []" Oct 31 05:28:13.343646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1749912742.mount: Deactivated successfully. Oct 31 05:28:13.346942 containerd[1681]: time="2025-10-31T05:28:13.346880914Z" level=info msg="CreateContainer within sandbox \"d3657c3af831de2c10299f719985c73d131289202b3f81429519cda6138dbc5b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"dc44461af4a90dafb953da181f67c55248c07af12128bdcfdc21e4c759152ac6\"" Oct 31 05:28:13.347716 containerd[1681]: time="2025-10-31T05:28:13.347453972Z" level=info msg="StartContainer for \"dc44461af4a90dafb953da181f67c55248c07af12128bdcfdc21e4c759152ac6\"" Oct 31 05:28:13.348573 containerd[1681]: time="2025-10-31T05:28:13.348552934Z" level=info msg="connecting to shim dc44461af4a90dafb953da181f67c55248c07af12128bdcfdc21e4c759152ac6" address="unix:///run/containerd/s/6cd99c696e166c29e6813eb72acf82c5fd7ac21c60e1dcd76428fcd86a0c56a4" protocol=ttrpc version=3 Oct 31 05:28:13.366011 systemd[1]: Started cri-containerd-dc44461af4a90dafb953da181f67c55248c07af12128bdcfdc21e4c759152ac6.scope - libcontainer container dc44461af4a90dafb953da181f67c55248c07af12128bdcfdc21e4c759152ac6. 
Oct 31 05:28:13.402276 systemd[1]: cri-containerd-dc44461af4a90dafb953da181f67c55248c07af12128bdcfdc21e4c759152ac6.scope: Deactivated successfully. Oct 31 05:28:13.406724 containerd[1681]: time="2025-10-31T05:28:13.406705142Z" level=info msg="StartContainer for \"dc44461af4a90dafb953da181f67c55248c07af12128bdcfdc21e4c759152ac6\" returns successfully" Oct 31 05:28:13.430504 containerd[1681]: time="2025-10-31T05:28:13.430310530Z" level=info msg="received exit event container_id:\"dc44461af4a90dafb953da181f67c55248c07af12128bdcfdc21e4c759152ac6\" id:\"dc44461af4a90dafb953da181f67c55248c07af12128bdcfdc21e4c759152ac6\" pid:3640 exited_at:{seconds:1761888493 nanos:403386751}" Oct 31 05:28:13.448031 containerd[1681]: time="2025-10-31T05:28:13.447969467Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc44461af4a90dafb953da181f67c55248c07af12128bdcfdc21e4c759152ac6\" id:\"dc44461af4a90dafb953da181f67c55248c07af12128bdcfdc21e4c759152ac6\" pid:3640 exited_at:{seconds:1761888493 nanos:403386751}" Oct 31 05:28:13.452111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc44461af4a90dafb953da181f67c55248c07af12128bdcfdc21e4c759152ac6-rootfs.mount: Deactivated successfully. Oct 31 05:28:13.946545 containerd[1681]: time="2025-10-31T05:28:13.946516915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 31 05:28:14.846941 kubelet[2992]: E1031 05:28:14.846828 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d5mr7" podUID="dfabc2cb-7622-45ac-b566-baf9171817d8" Oct 31 05:28:16.847560 kubelet[2992]: E1031 05:28:16.847516 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d5mr7" podUID="dfabc2cb-7622-45ac-b566-baf9171817d8" Oct 31 05:28:18.847315 kubelet[2992]: E1031 05:28:18.847274 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d5mr7" podUID="dfabc2cb-7622-45ac-b566-baf9171817d8" Oct 31 05:28:19.890235 containerd[1681]: time="2025-10-31T05:28:19.890163141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:28:19.891070 containerd[1681]: time="2025-10-31T05:28:19.890527351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 31 05:28:19.891070 containerd[1681]: time="2025-10-31T05:28:19.890893573Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:28:19.892249 containerd[1681]: time="2025-10-31T05:28:19.892237061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:28:19.892621 containerd[1681]: time="2025-10-31T05:28:19.892609538Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.946065404s" Oct 31 05:28:19.892680 containerd[1681]: time="2025-10-31T05:28:19.892660722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 31 05:28:19.898003 containerd[1681]: time="2025-10-31T05:28:19.897940238Z" level=info msg="CreateContainer within sandbox \"d3657c3af831de2c10299f719985c73d131289202b3f81429519cda6138dbc5b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 31 05:28:19.905112 containerd[1681]: time="2025-10-31T05:28:19.905090351Z" level=info msg="Container 72ddd53851c5bfb3f58898381b5c2e87fa402d77bd581968742b963d8e6d6057: CDI devices from CRI Config.CDIDevices: []" Oct 31 05:28:19.909387 containerd[1681]: time="2025-10-31T05:28:19.909364363Z" level=info msg="CreateContainer within sandbox \"d3657c3af831de2c10299f719985c73d131289202b3f81429519cda6138dbc5b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"72ddd53851c5bfb3f58898381b5c2e87fa402d77bd581968742b963d8e6d6057\"" Oct 31 05:28:19.910060 containerd[1681]: time="2025-10-31T05:28:19.910039622Z" level=info msg="StartContainer for \"72ddd53851c5bfb3f58898381b5c2e87fa402d77bd581968742b963d8e6d6057\"" Oct 31 05:28:19.911487 containerd[1681]: time="2025-10-31T05:28:19.911439142Z" level=info msg="connecting to shim 72ddd53851c5bfb3f58898381b5c2e87fa402d77bd581968742b963d8e6d6057" address="unix:///run/containerd/s/6cd99c696e166c29e6813eb72acf82c5fd7ac21c60e1dcd76428fcd86a0c56a4" protocol=ttrpc version=3 Oct 31 05:28:19.932028 systemd[1]: Started cri-containerd-72ddd53851c5bfb3f58898381b5c2e87fa402d77bd581968742b963d8e6d6057.scope - libcontainer container 72ddd53851c5bfb3f58898381b5c2e87fa402d77bd581968742b963d8e6d6057. Oct 31 05:28:19.967352 containerd[1681]: time="2025-10-31T05:28:19.967077474Z" level=info msg="StartContainer for \"72ddd53851c5bfb3f58898381b5c2e87fa402d77bd581968742b963d8e6d6057\" returns successfully" Oct 31 05:28:20.846708 kubelet[2992]: E1031 05:28:20.846664 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d5mr7" podUID="dfabc2cb-7622-45ac-b566-baf9171817d8" Oct 31 05:28:21.280599 systemd[1]: cri-containerd-72ddd53851c5bfb3f58898381b5c2e87fa402d77bd581968742b963d8e6d6057.scope: Deactivated successfully. Oct 31 05:28:21.280771 systemd[1]: cri-containerd-72ddd53851c5bfb3f58898381b5c2e87fa402d77bd581968742b963d8e6d6057.scope: Consumed 312ms CPU time, 158.1M memory peak, 24K read from disk, 171.3M written to disk. 
Oct 31 05:28:21.282377 containerd[1681]: time="2025-10-31T05:28:21.281632014Z" level=info msg="received exit event container_id:\"72ddd53851c5bfb3f58898381b5c2e87fa402d77bd581968742b963d8e6d6057\" id:\"72ddd53851c5bfb3f58898381b5c2e87fa402d77bd581968742b963d8e6d6057\" pid:3698 exited_at:{seconds:1761888501 nanos:281395901}" Oct 31 05:28:21.282377 containerd[1681]: time="2025-10-31T05:28:21.281750234Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72ddd53851c5bfb3f58898381b5c2e87fa402d77bd581968742b963d8e6d6057\" id:\"72ddd53851c5bfb3f58898381b5c2e87fa402d77bd581968742b963d8e6d6057\" pid:3698 exited_at:{seconds:1761888501 nanos:281395901}" Oct 31 05:28:21.321491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72ddd53851c5bfb3f58898381b5c2e87fa402d77bd581968742b963d8e6d6057-rootfs.mount: Deactivated successfully. Oct 31 05:28:21.351626 kubelet[2992]: I1031 05:28:21.351604 2992 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 31 05:28:21.457697 systemd[1]: Created slice kubepods-burstable-poddbcbf5c8_0241_4abd_a1f5_927e98ef9e75.slice - libcontainer container kubepods-burstable-poddbcbf5c8_0241_4abd_a1f5_927e98ef9e75.slice. Oct 31 05:28:21.465078 systemd[1]: Created slice kubepods-besteffort-pod186659ac_48c5_4a61_b81f_4021ad112f63.slice - libcontainer container kubepods-besteffort-pod186659ac_48c5_4a61_b81f_4021ad112f63.slice. Oct 31 05:28:21.471433 systemd[1]: Created slice kubepods-burstable-podf84825b5_aa90_4de7_a17e_50cbbffb855d.slice - libcontainer container kubepods-burstable-podf84825b5_aa90_4de7_a17e_50cbbffb855d.slice. Oct 31 05:28:21.477652 kubelet[2992]: I1031 05:28:21.477630 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/186659ac-48c5-4a61-b81f-4021ad112f63-calico-apiserver-certs\") pod \"calico-apiserver-76bf9d45db-nvffh\" (UID: \"186659ac-48c5-4a61-b81f-4021ad112f63\") " pod="calico-apiserver/calico-apiserver-76bf9d45db-nvffh" Oct 31 05:28:21.477796 kubelet[2992]: I1031 05:28:21.477785 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbcbf5c8-0241-4abd-a1f5-927e98ef9e75-config-volume\") pod \"coredns-674b8bbfcf-5zjgb\" (UID: \"dbcbf5c8-0241-4abd-a1f5-927e98ef9e75\") " pod="kube-system/coredns-674b8bbfcf-5zjgb" Oct 31 05:28:21.478358 kubelet[2992]: I1031 05:28:21.478224 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f84825b5-aa90-4de7-a17e-50cbbffb855d-config-volume\") pod \"coredns-674b8bbfcf-tw9d4\" (UID: \"f84825b5-aa90-4de7-a17e-50cbbffb855d\") " pod="kube-system/coredns-674b8bbfcf-tw9d4" Oct 31 05:28:21.478358 kubelet[2992]: I1031 05:28:21.478240 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxj7b\" (UniqueName: \"kubernetes.io/projected/f84825b5-aa90-4de7-a17e-50cbbffb855d-kube-api-access-fxj7b\") pod \"coredns-674b8bbfcf-tw9d4\" (UID: \"f84825b5-aa90-4de7-a17e-50cbbffb855d\") " pod="kube-system/coredns-674b8bbfcf-tw9d4" Oct 31 05:28:21.478358 kubelet[2992]: I1031 05:28:21.478250 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca64dee5-fbb5-4c1e-b441-a24d0937f7dd-tigera-ca-bundle\") pod 
\"calico-kube-controllers-7d5c795875-vrwbt\" (UID: \"ca64dee5-fbb5-4c1e-b441-a24d0937f7dd\") " pod="calico-system/calico-kube-controllers-7d5c795875-vrwbt" Oct 31 05:28:21.478358 kubelet[2992]: I1031 05:28:21.478261 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/886a9a56-e128-4afe-8400-8c8f904fe953-calico-apiserver-certs\") pod \"calico-apiserver-76bf9d45db-sqc28\" (UID: \"886a9a56-e128-4afe-8400-8c8f904fe953\") " pod="calico-apiserver/calico-apiserver-76bf9d45db-sqc28" Oct 31 05:28:21.478601 kubelet[2992]: I1031 05:28:21.478272 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqjkq\" (UniqueName: \"kubernetes.io/projected/886a9a56-e128-4afe-8400-8c8f904fe953-kube-api-access-kqjkq\") pod \"calico-apiserver-76bf9d45db-sqc28\" (UID: \"886a9a56-e128-4afe-8400-8c8f904fe953\") " pod="calico-apiserver/calico-apiserver-76bf9d45db-sqc28" Oct 31 05:28:21.478601 kubelet[2992]: I1031 05:28:21.478467 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6gs9\" (UniqueName: \"kubernetes.io/projected/186659ac-48c5-4a61-b81f-4021ad112f63-kube-api-access-z6gs9\") pod \"calico-apiserver-76bf9d45db-nvffh\" (UID: \"186659ac-48c5-4a61-b81f-4021ad112f63\") " pod="calico-apiserver/calico-apiserver-76bf9d45db-nvffh" Oct 31 05:28:21.478601 kubelet[2992]: I1031 05:28:21.478486 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5205e036-4995-4277-8348-1e3bf6f34336-goldmane-ca-bundle\") pod \"goldmane-666569f655-k2xp4\" (UID: \"5205e036-4995-4277-8348-1e3bf6f34336\") " pod="calico-system/goldmane-666569f655-k2xp4" Oct 31 05:28:21.478601 kubelet[2992]: I1031 05:28:21.478497 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5205e036-4995-4277-8348-1e3bf6f34336-goldmane-key-pair\") pod \"goldmane-666569f655-k2xp4\" (UID: \"5205e036-4995-4277-8348-1e3bf6f34336\") " pod="calico-system/goldmane-666569f655-k2xp4" Oct 31 05:28:21.478601 kubelet[2992]: I1031 05:28:21.478509 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb0778d0-3d72-4ac2-9721-069eeeab611b-whisker-ca-bundle\") pod \"whisker-566c8cd897-f2b62\" (UID: \"bb0778d0-3d72-4ac2-9721-069eeeab611b\") " pod="calico-system/whisker-566c8cd897-f2b62" Oct 31 05:28:21.479171 kubelet[2992]: I1031 05:28:21.478533 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bb0778d0-3d72-4ac2-9721-069eeeab611b-whisker-backend-key-pair\") pod \"whisker-566c8cd897-f2b62\" (UID: \"bb0778d0-3d72-4ac2-9721-069eeeab611b\") " pod="calico-system/whisker-566c8cd897-f2b62" Oct 31 05:28:21.479171 kubelet[2992]: I1031 05:28:21.478549 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp988\" (UniqueName: \"kubernetes.io/projected/5205e036-4995-4277-8348-1e3bf6f34336-kube-api-access-zp988\") pod \"goldmane-666569f655-k2xp4\" (UID: \"5205e036-4995-4277-8348-1e3bf6f34336\") " pod="calico-system/goldmane-666569f655-k2xp4" Oct 31 05:28:21.479171 
kubelet[2992]: I1031 05:28:21.478561 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxn4r\" (UniqueName: \"kubernetes.io/projected/dbcbf5c8-0241-4abd-a1f5-927e98ef9e75-kube-api-access-fxn4r\") pod \"coredns-674b8bbfcf-5zjgb\" (UID: \"dbcbf5c8-0241-4abd-a1f5-927e98ef9e75\") " pod="kube-system/coredns-674b8bbfcf-5zjgb" Oct 31 05:28:21.479171 kubelet[2992]: I1031 05:28:21.478576 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlb7g\" (UniqueName: \"kubernetes.io/projected/ca64dee5-fbb5-4c1e-b441-a24d0937f7dd-kube-api-access-zlb7g\") pod \"calico-kube-controllers-7d5c795875-vrwbt\" (UID: \"ca64dee5-fbb5-4c1e-b441-a24d0937f7dd\") " pod="calico-system/calico-kube-controllers-7d5c795875-vrwbt" Oct 31 05:28:21.479171 kubelet[2992]: I1031 05:28:21.478585 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxp6c\" (UniqueName: \"kubernetes.io/projected/bb0778d0-3d72-4ac2-9721-069eeeab611b-kube-api-access-fxp6c\") pod \"whisker-566c8cd897-f2b62\" (UID: \"bb0778d0-3d72-4ac2-9721-069eeeab611b\") " pod="calico-system/whisker-566c8cd897-f2b62" Oct 31 05:28:21.479411 kubelet[2992]: I1031 05:28:21.478776 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5205e036-4995-4277-8348-1e3bf6f34336-config\") pod \"goldmane-666569f655-k2xp4\" (UID: \"5205e036-4995-4277-8348-1e3bf6f34336\") " pod="calico-system/goldmane-666569f655-k2xp4" Oct 31 05:28:21.479281 systemd[1]: Created slice kubepods-besteffort-podca64dee5_fbb5_4c1e_b441_a24d0937f7dd.slice - libcontainer container kubepods-besteffort-podca64dee5_fbb5_4c1e_b441_a24d0937f7dd.slice. Oct 31 05:28:21.486892 systemd[1]: Created slice kubepods-besteffort-podbb0778d0_3d72_4ac2_9721_069eeeab611b.slice - libcontainer container kubepods-besteffort-podbb0778d0_3d72_4ac2_9721_069eeeab611b.slice. Oct 31 05:28:21.492864 systemd[1]: Created slice kubepods-besteffort-pod886a9a56_e128_4afe_8400_8c8f904fe953.slice - libcontainer container kubepods-besteffort-pod886a9a56_e128_4afe_8400_8c8f904fe953.slice. Oct 31 05:28:21.499118 systemd[1]: Created slice kubepods-besteffort-pod5205e036_4995_4277_8348_1e3bf6f34336.slice - libcontainer container kubepods-besteffort-pod5205e036_4995_4277_8348_1e3bf6f34336.slice. 
Oct 31 05:28:21.764160 containerd[1681]: time="2025-10-31T05:28:21.764133343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5zjgb,Uid:dbcbf5c8-0241-4abd-a1f5-927e98ef9e75,Namespace:kube-system,Attempt:0,}" Oct 31 05:28:21.768473 containerd[1681]: time="2025-10-31T05:28:21.768456126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bf9d45db-nvffh,Uid:186659ac-48c5-4a61-b81f-4021ad112f63,Namespace:calico-apiserver,Attempt:0,}" Oct 31 05:28:21.778823 containerd[1681]: time="2025-10-31T05:28:21.778452355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tw9d4,Uid:f84825b5-aa90-4de7-a17e-50cbbffb855d,Namespace:kube-system,Attempt:0,}" Oct 31 05:28:21.783779 containerd[1681]: time="2025-10-31T05:28:21.783763463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d5c795875-vrwbt,Uid:ca64dee5-fbb5-4c1e-b441-a24d0937f7dd,Namespace:calico-system,Attempt:0,}" Oct 31 05:28:21.805079 containerd[1681]: time="2025-10-31T05:28:21.805053539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k2xp4,Uid:5205e036-4995-4277-8348-1e3bf6f34336,Namespace:calico-system,Attempt:0,}" Oct 31 05:28:21.812637 containerd[1681]: time="2025-10-31T05:28:21.812621181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-566c8cd897-f2b62,Uid:bb0778d0-3d72-4ac2-9721-069eeeab611b,Namespace:calico-system,Attempt:0,}" Oct 31 05:28:21.817605 containerd[1681]: time="2025-10-31T05:28:21.817439545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bf9d45db-sqc28,Uid:886a9a56-e128-4afe-8400-8c8f904fe953,Namespace:calico-apiserver,Attempt:0,}" Oct 31 05:28:22.073398 containerd[1681]: time="2025-10-31T05:28:22.073311182Z" level=error msg="Failed to destroy network for sandbox \"3898bed1c49f6a2deb10968e17d0faa176d717e322c6aa2ab1f465246ee32614\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.076214 containerd[1681]: time="2025-10-31T05:28:22.076187087Z" level=error msg="Failed to destroy network for sandbox \"a282770741f6b0f342b7110a6eddc9de46dce0769085471fbfc31f96f5eb4a71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.077816 containerd[1681]: time="2025-10-31T05:28:22.077736369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 31 05:28:22.078643 containerd[1681]: time="2025-10-31T05:28:22.078628254Z" level=error msg="Failed to destroy network for sandbox \"8b35439168a3e737a9b0e148fec0dab507d661ace09bbee4b4297abd767592ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.078988 containerd[1681]: time="2025-10-31T05:28:22.078965464Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5zjgb,Uid:dbcbf5c8-0241-4abd-a1f5-927e98ef9e75,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3898bed1c49f6a2deb10968e17d0faa176d717e322c6aa2ab1f465246ee32614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.098772 containerd[1681]: time="2025-10-31T05:28:22.098634281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k2xp4,Uid:5205e036-4995-4277-8348-1e3bf6f34336,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a282770741f6b0f342b7110a6eddc9de46dce0769085471fbfc31f96f5eb4a71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.099826 containerd[1681]: time="2025-10-31T05:28:22.099553474Z" level=error msg="Failed to destroy network for sandbox \"41c6493a118857b4491de5a7df2fd97ff61bd22b18b4de03fdd0145b24a9d085\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.101120 containerd[1681]: time="2025-10-31T05:28:22.100475706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-566c8cd897-f2b62,Uid:bb0778d0-3d72-4ac2-9721-069eeeab611b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b35439168a3e737a9b0e148fec0dab507d661ace09bbee4b4297abd767592ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.103013 containerd[1681]: time="2025-10-31T05:28:22.102985978Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bf9d45db-nvffh,Uid:186659ac-48c5-4a61-b81f-4021ad112f63,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"41c6493a118857b4491de5a7df2fd97ff61bd22b18b4de03fdd0145b24a9d085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.104027 containerd[1681]: time="2025-10-31T05:28:22.104010064Z" level=error msg="Failed to destroy network for sandbox \"904c2df3d09c1262c7b7593e768a5bc57d7887b7cc41e74ae805af183585274f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.105013 containerd[1681]: time="2025-10-31T05:28:22.104548687Z" level=error msg="Failed to destroy network for sandbox \"4ba6ebd4ab837d457c0006573c1036cbcb8e82a13ad846df839e40fa87751b75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.105237 containerd[1681]: time="2025-10-31T05:28:22.105220388Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d5c795875-vrwbt,Uid:ca64dee5-fbb5-4c1e-b441-a24d0937f7dd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"904c2df3d09c1262c7b7593e768a5bc57d7887b7cc41e74ae805af183585274f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Oct 31 05:28:22.105419 containerd[1681]: time="2025-10-31T05:28:22.105338867Z" level=error msg="Failed to destroy network for sandbox \"68aa6442a83f9e87498103a38a38f29c9959db3ec76f4219680c07ddf9871385\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.106225 kubelet[2992]: E1031 05:28:22.105869 2992 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"904c2df3d09c1262c7b7593e768a5bc57d7887b7cc41e74ae805af183585274f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.106225 kubelet[2992]: E1031 05:28:22.105941 2992 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"904c2df3d09c1262c7b7593e768a5bc57d7887b7cc41e74ae805af183585274f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d5c795875-vrwbt" Oct 31 05:28:22.106225 kubelet[2992]: E1031 05:28:22.105958 2992 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"904c2df3d09c1262c7b7593e768a5bc57d7887b7cc41e74ae805af183585274f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d5c795875-vrwbt" Oct 31 05:28:22.106752 kubelet[2992]: E1031 05:28:22.105989 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d5c795875-vrwbt_calico-system(ca64dee5-fbb5-4c1e-b441-a24d0937f7dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d5c795875-vrwbt_calico-system(ca64dee5-fbb5-4c1e-b441-a24d0937f7dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"904c2df3d09c1262c7b7593e768a5bc57d7887b7cc41e74ae805af183585274f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d5c795875-vrwbt" podUID="ca64dee5-fbb5-4c1e-b441-a24d0937f7dd" Oct 31 05:28:22.106811 containerd[1681]: time="2025-10-31T05:28:22.106675480Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bf9d45db-sqc28,Uid:886a9a56-e128-4afe-8400-8c8f904fe953,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ba6ebd4ab837d457c0006573c1036cbcb8e82a13ad846df839e40fa87751b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.107368 kubelet[2992]: E1031 05:28:22.107186 2992 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3898bed1c49f6a2deb10968e17d0faa176d717e322c6aa2ab1f465246ee32614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.107368 kubelet[2992]: E1031 05:28:22.107211 2992 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3898bed1c49f6a2deb10968e17d0faa176d717e322c6aa2ab1f465246ee32614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5zjgb" Oct 31 05:28:22.107368 kubelet[2992]: E1031 05:28:22.107228 2992 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3898bed1c49f6a2deb10968e17d0faa176d717e322c6aa2ab1f465246ee32614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5zjgb" Oct 31 05:28:22.107456 kubelet[2992]: E1031 05:28:22.107261 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-5zjgb_kube-system(dbcbf5c8-0241-4abd-a1f5-927e98ef9e75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-5zjgb_kube-system(dbcbf5c8-0241-4abd-a1f5-927e98ef9e75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3898bed1c49f6a2deb10968e17d0faa176d717e322c6aa2ab1f465246ee32614\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-5zjgb" podUID="dbcbf5c8-0241-4abd-a1f5-927e98ef9e75" Oct 31 05:28:22.107456 kubelet[2992]: E1031 05:28:22.107287 2992 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a282770741f6b0f342b7110a6eddc9de46dce0769085471fbfc31f96f5eb4a71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.107456 kubelet[2992]: E1031 05:28:22.107297 2992 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a282770741f6b0f342b7110a6eddc9de46dce0769085471fbfc31f96f5eb4a71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-k2xp4" Oct 31 05:28:22.107550 kubelet[2992]: E1031 05:28:22.107309 2992 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a282770741f6b0f342b7110a6eddc9de46dce0769085471fbfc31f96f5eb4a71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-k2xp4" Oct 31 05:28:22.107550 kubelet[2992]: E1031 05:28:22.107328 2992 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-k2xp4_calico-system(5205e036-4995-4277-8348-1e3bf6f34336)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-k2xp4_calico-system(5205e036-4995-4277-8348-1e3bf6f34336)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a282770741f6b0f342b7110a6eddc9de46dce0769085471fbfc31f96f5eb4a71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-k2xp4" podUID="5205e036-4995-4277-8348-1e3bf6f34336" Oct 31 05:28:22.107550 kubelet[2992]: E1031 05:28:22.107429 2992 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b35439168a3e737a9b0e148fec0dab507d661ace09bbee4b4297abd767592ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.108003 kubelet[2992]: E1031 05:28:22.107442 2992 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b35439168a3e737a9b0e148fec0dab507d661ace09bbee4b4297abd767592ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-566c8cd897-f2b62" Oct 31 05:28:22.108003 kubelet[2992]: E1031 05:28:22.107449 2992 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b35439168a3e737a9b0e148fec0dab507d661ace09bbee4b4297abd767592ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-566c8cd897-f2b62" Oct 31 05:28:22.108003 kubelet[2992]: E1031 05:28:22.107468 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-566c8cd897-f2b62_calico-system(bb0778d0-3d72-4ac2-9721-069eeeab611b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-566c8cd897-f2b62_calico-system(bb0778d0-3d72-4ac2-9721-069eeeab611b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b35439168a3e737a9b0e148fec0dab507d661ace09bbee4b4297abd767592ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-566c8cd897-f2b62" podUID="bb0778d0-3d72-4ac2-9721-069eeeab611b" Oct 31 05:28:22.108095 containerd[1681]: time="2025-10-31T05:28:22.107739015Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tw9d4,Uid:f84825b5-aa90-4de7-a17e-50cbbffb855d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"68aa6442a83f9e87498103a38a38f29c9959db3ec76f4219680c07ddf9871385\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.108141 
kubelet[2992]: E1031 05:28:22.107488 2992 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41c6493a118857b4491de5a7df2fd97ff61bd22b18b4de03fdd0145b24a9d085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.108141 kubelet[2992]: E1031 05:28:22.107498 2992 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41c6493a118857b4491de5a7df2fd97ff61bd22b18b4de03fdd0145b24a9d085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76bf9d45db-nvffh" Oct 31 05:28:22.108141 kubelet[2992]: E1031 05:28:22.107513 2992 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41c6493a118857b4491de5a7df2fd97ff61bd22b18b4de03fdd0145b24a9d085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76bf9d45db-nvffh" Oct 31 05:28:22.108701 kubelet[2992]: E1031 05:28:22.107538 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76bf9d45db-nvffh_calico-apiserver(186659ac-48c5-4a61-b81f-4021ad112f63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76bf9d45db-nvffh_calico-apiserver(186659ac-48c5-4a61-b81f-4021ad112f63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41c6493a118857b4491de5a7df2fd97ff61bd22b18b4de03fdd0145b24a9d085\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76bf9d45db-nvffh" podUID="186659ac-48c5-4a61-b81f-4021ad112f63" Oct 31 05:28:22.108701 kubelet[2992]: E1031 05:28:22.107591 2992 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ba6ebd4ab837d457c0006573c1036cbcb8e82a13ad846df839e40fa87751b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.108701 kubelet[2992]: E1031 05:28:22.107603 2992 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ba6ebd4ab837d457c0006573c1036cbcb8e82a13ad846df839e40fa87751b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76bf9d45db-sqc28" Oct 31 05:28:22.108791 kubelet[2992]: E1031 05:28:22.107613 2992 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ba6ebd4ab837d457c0006573c1036cbcb8e82a13ad846df839e40fa87751b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76bf9d45db-sqc28" Oct 31 05:28:22.108791 kubelet[2992]: E1031 05:28:22.107636 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76bf9d45db-sqc28_calico-apiserver(886a9a56-e128-4afe-8400-8c8f904fe953)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76bf9d45db-sqc28_calico-apiserver(886a9a56-e128-4afe-8400-8c8f904fe953)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ba6ebd4ab837d457c0006573c1036cbcb8e82a13ad846df839e40fa87751b75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76bf9d45db-sqc28" podUID="886a9a56-e128-4afe-8400-8c8f904fe953" Oct 31 05:28:22.108791 kubelet[2992]: E1031 05:28:22.108226 2992 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68aa6442a83f9e87498103a38a38f29c9959db3ec76f4219680c07ddf9871385\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.109676 kubelet[2992]: E1031 05:28:22.108251 2992 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68aa6442a83f9e87498103a38a38f29c9959db3ec76f4219680c07ddf9871385\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tw9d4" Oct 31 05:28:22.109676 kubelet[2992]: E1031 05:28:22.108262 2992 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68aa6442a83f9e87498103a38a38f29c9959db3ec76f4219680c07ddf9871385\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tw9d4" Oct 31 05:28:22.109676 kubelet[2992]: E1031 05:28:22.108279 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-tw9d4_kube-system(f84825b5-aa90-4de7-a17e-50cbbffb855d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-tw9d4_kube-system(f84825b5-aa90-4de7-a17e-50cbbffb855d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68aa6442a83f9e87498103a38a38f29c9959db3ec76f4219680c07ddf9871385\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-tw9d4" podUID="f84825b5-aa90-4de7-a17e-50cbbffb855d" Oct 31 05:28:22.882985 systemd[1]: Created slice kubepods-besteffort-poddfabc2cb_7622_45ac_b566_baf9171817d8.slice - libcontainer container kubepods-besteffort-poddfabc2cb_7622_45ac_b566_baf9171817d8.slice. 
Oct 31 05:28:22.884969 containerd[1681]: time="2025-10-31T05:28:22.884907508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d5mr7,Uid:dfabc2cb-7622-45ac-b566-baf9171817d8,Namespace:calico-system,Attempt:0,}" Oct 31 05:28:22.935541 containerd[1681]: time="2025-10-31T05:28:22.935000954Z" level=error msg="Failed to destroy network for sandbox \"f1b4f20175a9f972abd6ad12dba09633c4a41613f146f2e73fd6a5f610604b46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.936453 systemd[1]: run-netns-cni\x2d2ec3525c\x2dd3e5\x2d5fa1\x2d0e6b\x2d2e08613c460b.mount: Deactivated successfully. Oct 31 05:28:22.952203 containerd[1681]: time="2025-10-31T05:28:22.952142460Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d5mr7,Uid:dfabc2cb-7622-45ac-b566-baf9171817d8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b4f20175a9f972abd6ad12dba09633c4a41613f146f2e73fd6a5f610604b46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.953394 kubelet[2992]: E1031 05:28:22.952311 2992 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b4f20175a9f972abd6ad12dba09633c4a41613f146f2e73fd6a5f610604b46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:28:22.953394 kubelet[2992]: E1031 05:28:22.952473 2992 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b4f20175a9f972abd6ad12dba09633c4a41613f146f2e73fd6a5f610604b46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d5mr7" Oct 31 05:28:22.953394 kubelet[2992]: E1031 05:28:22.952489 2992 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b4f20175a9f972abd6ad12dba09633c4a41613f146f2e73fd6a5f610604b46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d5mr7" Oct 31 05:28:22.953518 kubelet[2992]: E1031 05:28:22.952525 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d5mr7_calico-system(dfabc2cb-7622-45ac-b566-baf9171817d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d5mr7_calico-system(dfabc2cb-7622-45ac-b566-baf9171817d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1b4f20175a9f972abd6ad12dba09633c4a41613f146f2e73fd6a5f610604b46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d5mr7" podUID="dfabc2cb-7622-45ac-b566-baf9171817d8" 
Oct 31 05:28:29.680042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4130868132.mount: Deactivated successfully. Oct 31 05:28:29.749418 containerd[1681]: time="2025-10-31T05:28:29.749391415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 31 05:28:29.752543 containerd[1681]: time="2025-10-31T05:28:29.752497333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:28:29.755984 containerd[1681]: time="2025-10-31T05:28:29.755936634Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:28:29.756707 containerd[1681]: time="2025-10-31T05:28:29.756279364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 05:28:29.757376 containerd[1681]: time="2025-10-31T05:28:29.757361694Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.678877339s" Oct 31 05:28:29.757439 containerd[1681]: time="2025-10-31T05:28:29.757428460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 31 05:28:29.791454 containerd[1681]: time="2025-10-31T05:28:29.791423567Z" level=info msg="CreateContainer within sandbox \"d3657c3af831de2c10299f719985c73d131289202b3f81429519cda6138dbc5b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 31 05:28:29.814299 containerd[1681]: time="2025-10-31T05:28:29.814276067Z" level=info msg="Container 9a486369aaabc5b6df377a46dfe8999ae939e1df43d392afef92f7e09aba3cbb: CDI devices from CRI Config.CDIDevices: []" Oct 31 05:28:29.816811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1923904406.mount: Deactivated successfully. Oct 31 05:28:29.883013 containerd[1681]: time="2025-10-31T05:28:29.882981002Z" level=info msg="CreateContainer within sandbox \"d3657c3af831de2c10299f719985c73d131289202b3f81429519cda6138dbc5b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9a486369aaabc5b6df377a46dfe8999ae939e1df43d392afef92f7e09aba3cbb\"" Oct 31 05:28:29.883477 containerd[1681]: time="2025-10-31T05:28:29.883460976Z" level=info msg="StartContainer for \"9a486369aaabc5b6df377a46dfe8999ae939e1df43d392afef92f7e09aba3cbb\"" Oct 31 05:28:29.888645 containerd[1681]: time="2025-10-31T05:28:29.888621128Z" level=info msg="connecting to shim 9a486369aaabc5b6df377a46dfe8999ae939e1df43d392afef92f7e09aba3cbb" address="unix:///run/containerd/s/6cd99c696e166c29e6813eb72acf82c5fd7ac21c60e1dcd76428fcd86a0c56a4" protocol=ttrpc version=3 Oct 31 05:28:29.976097 systemd[1]: Started cri-containerd-9a486369aaabc5b6df377a46dfe8999ae939e1df43d392afef92f7e09aba3cbb.scope - libcontainer container 9a486369aaabc5b6df377a46dfe8999ae939e1df43d392afef92f7e09aba3cbb. 
Oct 31 05:28:30.012776 containerd[1681]: time="2025-10-31T05:28:30.012755553Z" level=info msg="StartContainer for \"9a486369aaabc5b6df377a46dfe8999ae939e1df43d392afef92f7e09aba3cbb\" returns successfully" Oct 31 05:28:30.094647 kubelet[2992]: I1031 05:28:30.094605 2992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hfv9f" podStartSLOduration=1.394115589 podStartE2EDuration="21.091575524s" podCreationTimestamp="2025-10-31 05:28:09 +0000 UTC" firstStartedPulling="2025-10-31 05:28:10.065684963 +0000 UTC m=+18.575555387" lastFinishedPulling="2025-10-31 05:28:29.763144897 +0000 UTC m=+38.273015322" observedRunningTime="2025-10-31 05:28:30.08880554 +0000 UTC m=+38.598675970" watchObservedRunningTime="2025-10-31 05:28:30.091575524 +0000 UTC m=+38.601445953" Oct 31 05:28:30.938479 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 31 05:28:30.943764 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 31 05:28:30.949346 containerd[1681]: time="2025-10-31T05:28:30.949313197Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a486369aaabc5b6df377a46dfe8999ae939e1df43d392afef92f7e09aba3cbb\" id:\"465938216aecf0028162b593acf29169426e32ce2a4c0d2396498b23a1d83345\" pid:4003 exit_status:1 exited_at:{seconds:1761888510 nanos:949083988}" Oct 31 05:28:31.213768 containerd[1681]: time="2025-10-31T05:28:31.213659666Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a486369aaabc5b6df377a46dfe8999ae939e1df43d392afef92f7e09aba3cbb\" id:\"7e8147374bc25283ba9ed3c9c87134d68b3d44b114b30a2b3fe3fb686ca24d4d\" pid:4054 exit_status:1 exited_at:{seconds:1761888511 nanos:213259421}" Oct 31 05:28:31.578814 kubelet[2992]: I1031 05:28:31.578438 2992 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bb0778d0-3d72-4ac2-9721-069eeeab611b-whisker-backend-key-pair\") pod \"bb0778d0-3d72-4ac2-9721-069eeeab611b\" (UID: \"bb0778d0-3d72-4ac2-9721-069eeeab611b\") " Oct 31 05:28:31.578814 kubelet[2992]: I1031 05:28:31.578493 2992 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb0778d0-3d72-4ac2-9721-069eeeab611b-whisker-ca-bundle\") pod \"bb0778d0-3d72-4ac2-9721-069eeeab611b\" (UID: \"bb0778d0-3d72-4ac2-9721-069eeeab611b\") " Oct 31 05:28:31.578814 kubelet[2992]: I1031 05:28:31.578509 2992 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxp6c\" (UniqueName: \"kubernetes.io/projected/bb0778d0-3d72-4ac2-9721-069eeeab611b-kube-api-access-fxp6c\") pod \"bb0778d0-3d72-4ac2-9721-069eeeab611b\" (UID: \"bb0778d0-3d72-4ac2-9721-069eeeab611b\") " Oct 31 05:28:31.587101 systemd[1]: var-lib-kubelet-pods-bb0778d0\x2d3d72\x2d4ac2\x2d9721\x2d069eeeab611b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 31 05:28:31.589589 kubelet[2992]: I1031 05:28:31.589562 2992 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb0778d0-3d72-4ac2-9721-069eeeab611b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "bb0778d0-3d72-4ac2-9721-069eeeab611b" (UID: "bb0778d0-3d72-4ac2-9721-069eeeab611b"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 05:28:31.591853 systemd[1]: var-lib-kubelet-pods-bb0778d0\x2d3d72\x2d4ac2\x2d9721\x2d069eeeab611b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfxp6c.mount: Deactivated successfully. Oct 31 05:28:31.592596 kubelet[2992]: I1031 05:28:31.592465 2992 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0778d0-3d72-4ac2-9721-069eeeab611b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "bb0778d0-3d72-4ac2-9721-069eeeab611b" (UID: "bb0778d0-3d72-4ac2-9721-069eeeab611b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 05:28:31.593050 kubelet[2992]: I1031 05:28:31.593027 2992 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0778d0-3d72-4ac2-9721-069eeeab611b-kube-api-access-fxp6c" (OuterVolumeSpecName: "kube-api-access-fxp6c") pod "bb0778d0-3d72-4ac2-9721-069eeeab611b" (UID: "bb0778d0-3d72-4ac2-9721-069eeeab611b"). InnerVolumeSpecName "kube-api-access-fxp6c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 05:28:31.688790 kubelet[2992]: I1031 05:28:31.688723 2992 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bb0778d0-3d72-4ac2-9721-069eeeab611b-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 31 05:28:31.688790 kubelet[2992]: I1031 05:28:31.688767 2992 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb0778d0-3d72-4ac2-9721-069eeeab611b-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 31 05:28:31.688790 kubelet[2992]: I1031 05:28:31.688774 2992 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fxp6c\" (UniqueName: \"kubernetes.io/projected/bb0778d0-3d72-4ac2-9721-069eeeab611b-kube-api-access-fxp6c\") on node \"localhost\" DevicePath \"\"" Oct 31 05:28:31.926974 systemd[1]: Removed slice kubepods-besteffort-podbb0778d0_3d72_4ac2_9721_069eeeab611b.slice - libcontainer container kubepods-besteffort-podbb0778d0_3d72_4ac2_9721_069eeeab611b.slice. Oct 31 05:28:32.185944 containerd[1681]: time="2025-10-31T05:28:32.185238183Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a486369aaabc5b6df377a46dfe8999ae939e1df43d392afef92f7e09aba3cbb\" id:\"846a0bce7985b912ee6227cc4e9b4f442a9d7c662ab687aafdbc542fa700c8e0\" pid:4098 exit_status:1 exited_at:{seconds:1761888512 nanos:184861605}" Oct 31 05:28:32.623775 systemd[1]: Created slice kubepods-besteffort-podfbed9ce9_cd92_4a0b_8450_76a451690c79.slice - libcontainer container kubepods-besteffort-podfbed9ce9_cd92_4a0b_8450_76a451690c79.slice. 
Oct 31 05:28:32.768362 kubelet[2992]: I1031 05:28:32.744335 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbed9ce9-cd92-4a0b-8450-76a451690c79-whisker-ca-bundle\") pod \"whisker-b8c4b8ddf-ztgvm\" (UID: \"fbed9ce9-cd92-4a0b-8450-76a451690c79\") " pod="calico-system/whisker-b8c4b8ddf-ztgvm" Oct 31 05:28:32.768670 kubelet[2992]: I1031 05:28:32.768395 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttqzx\" (UniqueName: \"kubernetes.io/projected/fbed9ce9-cd92-4a0b-8450-76a451690c79-kube-api-access-ttqzx\") pod \"whisker-b8c4b8ddf-ztgvm\" (UID: \"fbed9ce9-cd92-4a0b-8450-76a451690c79\") " pod="calico-system/whisker-b8c4b8ddf-ztgvm" Oct 31 05:28:32.768670 kubelet[2992]: I1031 05:28:32.768417 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fbed9ce9-cd92-4a0b-8450-76a451690c79-whisker-backend-key-pair\") pod \"whisker-b8c4b8ddf-ztgvm\" (UID: \"fbed9ce9-cd92-4a0b-8450-76a451690c79\") " pod="calico-system/whisker-b8c4b8ddf-ztgvm" Oct 31 05:28:32.959256 containerd[1681]: time="2025-10-31T05:28:32.959194101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b8c4b8ddf-ztgvm,Uid:fbed9ce9-cd92-4a0b-8450-76a451690c79,Namespace:calico-system,Attempt:0,}" Oct 31 05:28:33.074205 systemd-networkd[1579]: vxlan.calico: Link UP Oct 31 05:28:33.074209 systemd-networkd[1579]: vxlan.calico: Gained carrier Oct 31 05:28:33.695836 systemd-networkd[1579]: calia8ebf357584: Link UP Oct 31 05:28:33.696470 systemd-networkd[1579]: calia8ebf357584: Gained carrier Oct 31 05:28:33.710939 containerd[1681]: 2025-10-31 05:28:33.168 [INFO][4251] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--b8c4b8ddf--ztgvm-eth0 whisker-b8c4b8ddf- calico-system fbed9ce9-cd92-4a0b-8450-76a451690c79 927 0 2025-10-31 05:28:32 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:b8c4b8ddf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-b8c4b8ddf-ztgvm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia8ebf357584 [] [] }} ContainerID="f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74" Namespace="calico-system" Pod="whisker-b8c4b8ddf-ztgvm" WorkloadEndpoint="localhost-k8s-whisker--b8c4b8ddf--ztgvm-" Oct 31 05:28:33.710939 containerd[1681]: 2025-10-31 05:28:33.169 [INFO][4251] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74" Namespace="calico-system" Pod="whisker-b8c4b8ddf-ztgvm" WorkloadEndpoint="localhost-k8s-whisker--b8c4b8ddf--ztgvm-eth0" Oct 31 05:28:33.710939 containerd[1681]: 2025-10-31 05:28:33.641 [INFO][4290] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74" HandleID="k8s-pod-network.f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74" Workload="localhost-k8s-whisker--b8c4b8ddf--ztgvm-eth0" Oct 31 05:28:33.712033 containerd[1681]: 2025-10-31 05:28:33.644 [INFO][4290] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74" 
HandleID="k8s-pod-network.f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74" Workload="localhost-k8s-whisker--b8c4b8ddf--ztgvm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000392610), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-b8c4b8ddf-ztgvm", "timestamp":"2025-10-31 05:28:33.641727822 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 05:28:33.712033 containerd[1681]: 2025-10-31 05:28:33.644 [INFO][4290] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:28:33.712033 containerd[1681]: 2025-10-31 05:28:33.644 [INFO][4290] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:28:33.712033 containerd[1681]: 2025-10-31 05:28:33.645 [INFO][4290] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 05:28:33.712033 containerd[1681]: 2025-10-31 05:28:33.660 [INFO][4290] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74" host="localhost" Oct 31 05:28:33.712033 containerd[1681]: 2025-10-31 05:28:33.669 [INFO][4290] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 05:28:33.712033 containerd[1681]: 2025-10-31 05:28:33.674 [INFO][4290] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 05:28:33.712033 containerd[1681]: 2025-10-31 05:28:33.676 [INFO][4290] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 05:28:33.712033 containerd[1681]: 2025-10-31 05:28:33.678 [INFO][4290] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 05:28:33.712033 containerd[1681]: 2025-10-31 05:28:33.678 [INFO][4290] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74" host="localhost" Oct 31 05:28:33.713117 containerd[1681]: 2025-10-31 05:28:33.679 [INFO][4290] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74 Oct 31 05:28:33.713117 containerd[1681]: 2025-10-31 05:28:33.682 [INFO][4290] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74" host="localhost" Oct 31 05:28:33.713117 containerd[1681]: 2025-10-31 05:28:33.686 [INFO][4290] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74" host="localhost" Oct 31 05:28:33.713117 containerd[1681]: 2025-10-31 05:28:33.687 [INFO][4290] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74" host="localhost" Oct 31 05:28:33.713117 containerd[1681]: 2025-10-31 05:28:33.687 [INFO][4290] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 05:28:33.713117 containerd[1681]: 2025-10-31 05:28:33.687 [INFO][4290] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74" HandleID="k8s-pod-network.f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74" Workload="localhost-k8s-whisker--b8c4b8ddf--ztgvm-eth0" Oct 31 05:28:33.713297 containerd[1681]: 2025-10-31 05:28:33.689 [INFO][4251] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74" Namespace="calico-system" Pod="whisker-b8c4b8ddf-ztgvm" WorkloadEndpoint="localhost-k8s-whisker--b8c4b8ddf--ztgvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--b8c4b8ddf--ztgvm-eth0", GenerateName:"whisker-b8c4b8ddf-", Namespace:"calico-system", SelfLink:"", UID:"fbed9ce9-cd92-4a0b-8450-76a451690c79", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 28, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b8c4b8ddf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-b8c4b8ddf-ztgvm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia8ebf357584", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:28:33.713297 containerd[1681]: 2025-10-31 05:28:33.689 [INFO][4251] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74" Namespace="calico-system" Pod="whisker-b8c4b8ddf-ztgvm" WorkloadEndpoint="localhost-k8s-whisker--b8c4b8ddf--ztgvm-eth0" Oct 31 05:28:33.713368 containerd[1681]: 2025-10-31 05:28:33.689 [INFO][4251] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8ebf357584 ContainerID="f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74" Namespace="calico-system" Pod="whisker-b8c4b8ddf-ztgvm" WorkloadEndpoint="localhost-k8s-whisker--b8c4b8ddf--ztgvm-eth0" Oct 31 05:28:33.713368 containerd[1681]: 2025-10-31 05:28:33.697 [INFO][4251] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74" Namespace="calico-system" Pod="whisker-b8c4b8ddf-ztgvm" WorkloadEndpoint="localhost-k8s-whisker--b8c4b8ddf--ztgvm-eth0" Oct 31 05:28:33.713405 containerd[1681]: 2025-10-31 05:28:33.697 [INFO][4251] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74" Namespace="calico-system" Pod="whisker-b8c4b8ddf-ztgvm" WorkloadEndpoint="localhost-k8s-whisker--b8c4b8ddf--ztgvm-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--b8c4b8ddf--ztgvm-eth0", GenerateName:"whisker-b8c4b8ddf-", Namespace:"calico-system", SelfLink:"", UID:"fbed9ce9-cd92-4a0b-8450-76a451690c79", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 28, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b8c4b8ddf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74", Pod:"whisker-b8c4b8ddf-ztgvm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia8ebf357584", MAC:"42:f7:45:34:5c:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:28:33.713481 containerd[1681]: 2025-10-31 05:28:33.707 [INFO][4251] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74" Namespace="calico-system" Pod="whisker-b8c4b8ddf-ztgvm" WorkloadEndpoint="localhost-k8s-whisker--b8c4b8ddf--ztgvm-eth0" Oct 31 05:28:33.858954 kubelet[2992]: I1031 05:28:33.858778 2992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb0778d0-3d72-4ac2-9721-069eeeab611b" path="/var/lib/kubelet/pods/bb0778d0-3d72-4ac2-9721-069eeeab611b/volumes" Oct 31 05:28:33.866178 containerd[1681]: time="2025-10-31T05:28:33.866150763Z" level=info msg="connecting to shim f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74" address="unix:///run/containerd/s/cd2b6b28943704c4da716ab4483d545d2a21d120965d7e6c1e6c9184dcfa2b41" namespace=k8s.io protocol=ttrpc version=3 Oct 31 05:28:33.888259 systemd[1]: Started cri-containerd-f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74.scope - libcontainer container f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74. 
Oct 31 05:28:33.904135 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 05:28:33.936273 containerd[1681]: time="2025-10-31T05:28:33.936242606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b8c4b8ddf-ztgvm,Uid:fbed9ce9-cd92-4a0b-8450-76a451690c79,Namespace:calico-system,Attempt:0,} returns sandbox id \"f09c9bfc9342f3c35cc996d23f7f0f10b6f5da9a0cf21418f8512be3bbef1f74\"" Oct 31 05:28:33.961506 containerd[1681]: time="2025-10-31T05:28:33.961414885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 05:28:34.272441 containerd[1681]: time="2025-10-31T05:28:34.272168667Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:28:34.272749 containerd[1681]: time="2025-10-31T05:28:34.272672740Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 05:28:34.272749 containerd[1681]: time="2025-10-31T05:28:34.272732364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 05:28:34.276719 kubelet[2992]: E1031 05:28:34.274583 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 05:28:34.276836 kubelet[2992]: E1031 05:28:34.276731 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 05:28:34.295292 kubelet[2992]: E1031 05:28:34.295230 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:97d129c82c6c425ba6815f4f838775d1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ttqzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b8c4b8ddf-ztgvm_calico-system(fbed9ce9-cd92-4a0b-8450-76a451690c79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 05:28:34.297681 containerd[1681]: time="2025-10-31T05:28:34.297582270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 05:28:34.677992 containerd[1681]: time="2025-10-31T05:28:34.677898602Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:28:34.683366 containerd[1681]: time="2025-10-31T05:28:34.683320355Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 05:28:34.683521 containerd[1681]: time="2025-10-31T05:28:34.683392868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 05:28:34.683598 kubelet[2992]: E1031 05:28:34.683519 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 05:28:34.683598 kubelet[2992]: E1031 05:28:34.683556 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 05:28:34.684004 kubelet[2992]: E1031 05:28:34.683647 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ttqzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b8c4b8ddf-ztgvm_calico-system(fbed9ce9-cd92-4a0b-8450-76a451690c79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 05:28:34.685335 kubelet[2992]: E1031 05:28:34.685277 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b8c4b8ddf-ztgvm" podUID="fbed9ce9-cd92-4a0b-8450-76a451690c79" Oct 31 05:28:34.741074 systemd-networkd[1579]: vxlan.calico: Gained IPv6LL Oct 31 05:28:34.847835 containerd[1681]: time="2025-10-31T05:28:34.847784575Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-5zjgb,Uid:dbcbf5c8-0241-4abd-a1f5-927e98ef9e75,Namespace:kube-system,Attempt:0,}" Oct 31 05:28:34.952114 systemd-networkd[1579]: calicb615dee145: Link UP Oct 31 05:28:34.954019 systemd-networkd[1579]: calicb615dee145: Gained carrier Oct 31 05:28:34.971137 containerd[1681]: 2025-10-31 05:28:34.883 [INFO][4391] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--5zjgb-eth0 coredns-674b8bbfcf- kube-system dbcbf5c8-0241-4abd-a1f5-927e98ef9e75 846 0 2025-10-31 05:27:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-5zjgb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicb615dee145 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-5zjgb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5zjgb-" Oct 31 05:28:34.971137 containerd[1681]: 2025-10-31 05:28:34.884 [INFO][4391] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-5zjgb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5zjgb-eth0" Oct 31 05:28:34.971137 containerd[1681]: 2025-10-31 05:28:34.918 [INFO][4403] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1" HandleID="k8s-pod-network.fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1" Workload="localhost-k8s-coredns--674b8bbfcf--5zjgb-eth0" Oct 31 05:28:34.971300 containerd[1681]: 2025-10-31 05:28:34.918 [INFO][4403] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1" HandleID="k8s-pod-network.fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1" Workload="localhost-k8s-coredns--674b8bbfcf--5zjgb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-5zjgb", "timestamp":"2025-10-31 05:28:34.918237812 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 05:28:34.971300 containerd[1681]: 2025-10-31 05:28:34.918 [INFO][4403] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:28:34.971300 containerd[1681]: 2025-10-31 05:28:34.918 [INFO][4403] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 05:28:34.971300 containerd[1681]: 2025-10-31 05:28:34.918 [INFO][4403] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 05:28:34.971300 containerd[1681]: 2025-10-31 05:28:34.923 [INFO][4403] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1" host="localhost" Oct 31 05:28:34.971300 containerd[1681]: 2025-10-31 05:28:34.927 [INFO][4403] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 05:28:34.971300 containerd[1681]: 2025-10-31 05:28:34.931 [INFO][4403] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 05:28:34.971300 containerd[1681]: 2025-10-31 05:28:34.933 [INFO][4403] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 05:28:34.971300 containerd[1681]: 2025-10-31 05:28:34.936 [INFO][4403] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 05:28:34.971300 containerd[1681]: 2025-10-31 05:28:34.936 [INFO][4403] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1" host="localhost" Oct 31 05:28:34.972156 containerd[1681]: 2025-10-31 05:28:34.937 [INFO][4403] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1 Oct 31 05:28:34.972156 containerd[1681]: 2025-10-31 05:28:34.941 [INFO][4403] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1" host="localhost" Oct 31 05:28:34.972156 containerd[1681]: 2025-10-31 05:28:34.945 [INFO][4403] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1" host="localhost" Oct 31 05:28:34.972156 containerd[1681]: 2025-10-31 05:28:34.945 [INFO][4403] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1" host="localhost" Oct 31 05:28:34.972156 containerd[1681]: 2025-10-31 05:28:34.945 [INFO][4403] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 05:28:34.972156 containerd[1681]: 2025-10-31 05:28:34.945 [INFO][4403] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1" HandleID="k8s-pod-network.fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1" Workload="localhost-k8s-coredns--674b8bbfcf--5zjgb-eth0" Oct 31 05:28:34.972657 containerd[1681]: 2025-10-31 05:28:34.949 [INFO][4391] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-5zjgb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5zjgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--5zjgb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dbcbf5c8-0241-4abd-a1f5-927e98ef9e75", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-5zjgb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb615dee145", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:28:34.973216 containerd[1681]: 2025-10-31 05:28:34.949 [INFO][4391] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-5zjgb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5zjgb-eth0" Oct 31 05:28:34.973216 containerd[1681]: 2025-10-31 05:28:34.949 [INFO][4391] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb615dee145 ContainerID="fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-5zjgb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5zjgb-eth0" Oct 31 05:28:34.973216 containerd[1681]: 2025-10-31 05:28:34.954 [INFO][4391] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-5zjgb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5zjgb-eth0" Oct 31 05:28:34.973279 
containerd[1681]: 2025-10-31 05:28:34.955 [INFO][4391] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-5zjgb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5zjgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--5zjgb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dbcbf5c8-0241-4abd-a1f5-927e98ef9e75", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1", Pod:"coredns-674b8bbfcf-5zjgb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb615dee145", MAC:"2e:69:cd:f3:f3:79", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:28:34.973279 containerd[1681]: 2025-10-31 05:28:34.964 [INFO][4391] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-5zjgb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5zjgb-eth0" Oct 31 05:28:34.994086 containerd[1681]: time="2025-10-31T05:28:34.994044699Z" level=info msg="connecting to shim fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1" address="unix:///run/containerd/s/20752b3670623343fc98a4cc2c7cb619a7e6afbb2f6c2c3c289677bd3565c00f" namespace=k8s.io protocol=ttrpc version=3 Oct 31 05:28:35.018074 systemd[1]: Started cri-containerd-fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1.scope - libcontainer container fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1. 
Oct 31 05:28:35.027402 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 05:28:35.063956 containerd[1681]: time="2025-10-31T05:28:35.063629787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5zjgb,Uid:dbcbf5c8-0241-4abd-a1f5-927e98ef9e75,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1\"" Oct 31 05:28:35.106910 containerd[1681]: time="2025-10-31T05:28:35.106872620Z" level=info msg="CreateContainer within sandbox \"fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 05:28:35.126348 containerd[1681]: time="2025-10-31T05:28:35.125573780Z" level=info msg="Container 2f319e006c46db9590fc60dbbf960b539d36f317d12e74cd2e8eab7ce6fd152e: CDI devices from CRI Config.CDIDevices: []" Oct 31 05:28:35.126155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount324134712.mount: Deactivated successfully. Oct 31 05:28:35.131227 containerd[1681]: time="2025-10-31T05:28:35.131132136Z" level=info msg="CreateContainer within sandbox \"fd03467c7a6b80084c6e531ab5cb17b75462203fcfdd9c61284a7fa151da47b1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2f319e006c46db9590fc60dbbf960b539d36f317d12e74cd2e8eab7ce6fd152e\"" Oct 31 05:28:35.137066 containerd[1681]: time="2025-10-31T05:28:35.137039891Z" level=info msg="StartContainer for \"2f319e006c46db9590fc60dbbf960b539d36f317d12e74cd2e8eab7ce6fd152e\"" Oct 31 05:28:35.137607 containerd[1681]: time="2025-10-31T05:28:35.137585838Z" level=info msg="connecting to shim 2f319e006c46db9590fc60dbbf960b539d36f317d12e74cd2e8eab7ce6fd152e" address="unix:///run/containerd/s/20752b3670623343fc98a4cc2c7cb619a7e6afbb2f6c2c3c289677bd3565c00f" protocol=ttrpc version=3 Oct 31 05:28:35.156052 systemd[1]: Started cri-containerd-2f319e006c46db9590fc60dbbf960b539d36f317d12e74cd2e8eab7ce6fd152e.scope - libcontainer container 2f319e006c46db9590fc60dbbf960b539d36f317d12e74cd2e8eab7ce6fd152e. 
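Annotation: the sequence above ("RunPodSandbox ... returns sandbox id", "CreateContainer within sandbox ...", "connecting to shim ...", then "StartContainer") is the kubelet driving containerd over the CRI. The Go sketch below traces those three calls; the import path and field names are recalled from the CRI v1 API and should be treated as assumptions, the pod metadata literals are copied from the log purely for illustration, and the image reference is a placeholder.

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// The kubelet speaks CRI to containerd over its unix socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtime.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtime.PodSandboxConfig{
		Metadata: &runtime.PodSandboxMetadata{
			Name:      "coredns-674b8bbfcf-5zjgb",
			Namespace: "kube-system",
			Uid:       "dbcbf5c8-0241-4abd-a1f5-927e98ef9e75",
		},
	}

	// 1. RunPodSandbox: creates the pause sandbox; this is what triggers the
	//    Calico CNI ADD and the IPAM assignment logged earlier.
	sb, err := rt.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within that sandbox ("CreateContainer within sandbox ...").
	cc, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtime.ContainerConfig{
			Metadata: &runtime.ContainerMetadata{Name: "coredns"},
			// Placeholder image reference; the log does not name the coredns image.
			Image: &runtime.ImageSpec{Image: "example.com/coredns:latest"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer ("StartContainer for ... returns successfully").
	if _, err := rt.StartContainer(ctx, &runtime.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("started container", cc.ContainerId, "in sandbox", sb.PodSandboxId)
}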
Oct 31 05:28:35.178129 containerd[1681]: time="2025-10-31T05:28:35.178104225Z" level=info msg="StartContainer for \"2f319e006c46db9590fc60dbbf960b539d36f317d12e74cd2e8eab7ce6fd152e\" returns successfully" Oct 31 05:28:35.199388 kubelet[2992]: E1031 05:28:35.199337 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b8c4b8ddf-ztgvm" podUID="fbed9ce9-cd92-4a0b-8450-76a451690c79" Oct 31 05:28:35.225433 kubelet[2992]: I1031 05:28:35.225105 2992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5zjgb" podStartSLOduration=38.225083133 podStartE2EDuration="38.225083133s" podCreationTimestamp="2025-10-31 05:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 05:28:35.215108722 +0000 UTC m=+43.724979158" watchObservedRunningTime="2025-10-31 05:28:35.225083133 +0000 UTC m=+43.734953569" Oct 31 05:28:35.637107 systemd-networkd[1579]: calia8ebf357584: Gained IPv6LL Oct 31 05:28:35.848342 containerd[1681]: time="2025-10-31T05:28:35.848265178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bf9d45db-sqc28,Uid:886a9a56-e128-4afe-8400-8c8f904fe953,Namespace:calico-apiserver,Attempt:0,}" Oct 31 05:28:35.971625 systemd-networkd[1579]: calib6db13fbc62: Link UP Oct 31 05:28:35.972618 systemd-networkd[1579]: calib6db13fbc62: Gained carrier Oct 31 05:28:35.992179 containerd[1681]: 2025-10-31 05:28:35.902 [INFO][4500] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76bf9d45db--sqc28-eth0 calico-apiserver-76bf9d45db- calico-apiserver 886a9a56-e128-4afe-8400-8c8f904fe953 849 0 2025-10-31 05:28:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76bf9d45db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76bf9d45db-sqc28 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib6db13fbc62 [] [] }} ContainerID="e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81" Namespace="calico-apiserver" Pod="calico-apiserver-76bf9d45db-sqc28" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bf9d45db--sqc28-" Oct 31 05:28:35.992179 containerd[1681]: 2025-10-31 05:28:35.902 [INFO][4500] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81" Namespace="calico-apiserver" 
Pod="calico-apiserver-76bf9d45db-sqc28" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bf9d45db--sqc28-eth0" Oct 31 05:28:35.992179 containerd[1681]: 2025-10-31 05:28:35.931 [INFO][4512] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81" HandleID="k8s-pod-network.e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81" Workload="localhost-k8s-calico--apiserver--76bf9d45db--sqc28-eth0" Oct 31 05:28:35.992179 containerd[1681]: 2025-10-31 05:28:35.931 [INFO][4512] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81" HandleID="k8s-pod-network.e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81" Workload="localhost-k8s-calico--apiserver--76bf9d45db--sqc28-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76bf9d45db-sqc28", "timestamp":"2025-10-31 05:28:35.931517012 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 05:28:35.992179 containerd[1681]: 2025-10-31 05:28:35.931 [INFO][4512] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:28:35.992179 containerd[1681]: 2025-10-31 05:28:35.931 [INFO][4512] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:28:35.992179 containerd[1681]: 2025-10-31 05:28:35.931 [INFO][4512] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 05:28:35.992179 containerd[1681]: 2025-10-31 05:28:35.937 [INFO][4512] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81" host="localhost" Oct 31 05:28:35.992179 containerd[1681]: 2025-10-31 05:28:35.940 [INFO][4512] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 05:28:35.992179 containerd[1681]: 2025-10-31 05:28:35.943 [INFO][4512] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 05:28:35.992179 containerd[1681]: 2025-10-31 05:28:35.944 [INFO][4512] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 05:28:35.992179 containerd[1681]: 2025-10-31 05:28:35.945 [INFO][4512] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 05:28:35.992179 containerd[1681]: 2025-10-31 05:28:35.945 [INFO][4512] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81" host="localhost" Oct 31 05:28:35.992179 containerd[1681]: 2025-10-31 05:28:35.947 [INFO][4512] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81 Oct 31 05:28:35.992179 containerd[1681]: 2025-10-31 05:28:35.951 [INFO][4512] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81" host="localhost" Oct 31 05:28:35.992179 containerd[1681]: 2025-10-31 05:28:35.962 [INFO][4512] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81" host="localhost" Oct 31 05:28:35.992179 containerd[1681]: 2025-10-31 05:28:35.962 [INFO][4512] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81" host="localhost" Oct 31 05:28:35.992179 containerd[1681]: 2025-10-31 05:28:35.962 [INFO][4512] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:28:35.992179 containerd[1681]: 2025-10-31 05:28:35.962 [INFO][4512] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81" HandleID="k8s-pod-network.e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81" Workload="localhost-k8s-calico--apiserver--76bf9d45db--sqc28-eth0" Oct 31 05:28:36.005587 containerd[1681]: 2025-10-31 05:28:35.964 [INFO][4500] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81" Namespace="calico-apiserver" Pod="calico-apiserver-76bf9d45db-sqc28" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bf9d45db--sqc28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76bf9d45db--sqc28-eth0", GenerateName:"calico-apiserver-76bf9d45db-", Namespace:"calico-apiserver", SelfLink:"", UID:"886a9a56-e128-4afe-8400-8c8f904fe953", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 28, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bf9d45db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76bf9d45db-sqc28", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6db13fbc62", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:28:36.005587 containerd[1681]: 2025-10-31 05:28:35.969 [INFO][4500] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81" Namespace="calico-apiserver" Pod="calico-apiserver-76bf9d45db-sqc28" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bf9d45db--sqc28-eth0" Oct 31 05:28:36.005587 containerd[1681]: 2025-10-31 05:28:35.969 [INFO][4500] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6db13fbc62 ContainerID="e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81" Namespace="calico-apiserver" Pod="calico-apiserver-76bf9d45db-sqc28" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bf9d45db--sqc28-eth0" Oct 31 05:28:36.005587 containerd[1681]: 2025-10-31 
05:28:35.972 [INFO][4500] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81" Namespace="calico-apiserver" Pod="calico-apiserver-76bf9d45db-sqc28" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bf9d45db--sqc28-eth0" Oct 31 05:28:36.005587 containerd[1681]: 2025-10-31 05:28:35.973 [INFO][4500] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81" Namespace="calico-apiserver" Pod="calico-apiserver-76bf9d45db-sqc28" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bf9d45db--sqc28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76bf9d45db--sqc28-eth0", GenerateName:"calico-apiserver-76bf9d45db-", Namespace:"calico-apiserver", SelfLink:"", UID:"886a9a56-e128-4afe-8400-8c8f904fe953", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 28, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bf9d45db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81", Pod:"calico-apiserver-76bf9d45db-sqc28", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6db13fbc62", MAC:"c2:46:6d:be:69:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:28:36.005587 containerd[1681]: 2025-10-31 05:28:35.990 [INFO][4500] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81" Namespace="calico-apiserver" Pod="calico-apiserver-76bf9d45db-sqc28" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bf9d45db--sqc28-eth0" Oct 31 05:28:36.036454 containerd[1681]: time="2025-10-31T05:28:36.036042290Z" level=info msg="connecting to shim e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81" address="unix:///run/containerd/s/6aa8ff81c7846791a076493ba2081e39be42413eafac30e4771686d8df8afc48" namespace=k8s.io protocol=ttrpc version=3 Oct 31 05:28:36.060054 systemd[1]: Started cri-containerd-e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81.scope - libcontainer container e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81. 
Oct 31 05:28:36.069630 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 05:28:36.094557 containerd[1681]: time="2025-10-31T05:28:36.094528516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bf9d45db-sqc28,Uid:886a9a56-e128-4afe-8400-8c8f904fe953,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e4593aa9ba0886bb2570c3b26a41de4a52ec164ec8d937546c1d1d42f9023c81\"" Oct 31 05:28:36.097808 containerd[1681]: time="2025-10-31T05:28:36.097703749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 05:28:36.149014 systemd-networkd[1579]: calicb615dee145: Gained IPv6LL Oct 31 05:28:36.484499 containerd[1681]: time="2025-10-31T05:28:36.484456908Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:28:36.486010 containerd[1681]: time="2025-10-31T05:28:36.485988748Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 05:28:36.492552 containerd[1681]: time="2025-10-31T05:28:36.486054822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 05:28:36.492635 kubelet[2992]: E1031 05:28:36.486155 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:28:36.492635 kubelet[2992]: E1031 05:28:36.486196 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:28:36.492635 kubelet[2992]: E1031 05:28:36.486299 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqjkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 
5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76bf9d45db-sqc28_calico-apiserver(886a9a56-e128-4afe-8400-8c8f904fe953): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 05:28:36.492635 kubelet[2992]: E1031 05:28:36.488287 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76bf9d45db-sqc28" podUID="886a9a56-e128-4afe-8400-8c8f904fe953" Oct 31 05:28:36.850122 containerd[1681]: time="2025-10-31T05:28:36.850093897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d5c795875-vrwbt,Uid:ca64dee5-fbb5-4c1e-b441-a24d0937f7dd,Namespace:calico-system,Attempt:0,}" Oct 31 05:28:36.851247 containerd[1681]: time="2025-10-31T05:28:36.850587500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tw9d4,Uid:f84825b5-aa90-4de7-a17e-50cbbffb855d,Namespace:kube-system,Attempt:0,}" Oct 31 05:28:36.851247 containerd[1681]: time="2025-10-31T05:28:36.850677333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k2xp4,Uid:5205e036-4995-4277-8348-1e3bf6f34336,Namespace:calico-system,Attempt:0,}" Oct 31 05:28:36.851247 containerd[1681]: time="2025-10-31T05:28:36.850802811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bf9d45db-nvffh,Uid:186659ac-48c5-4a61-b81f-4021ad112f63,Namespace:calico-apiserver,Attempt:0,}" Oct 31 05:28:36.851247 containerd[1681]: time="2025-10-31T05:28:36.850807810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d5mr7,Uid:dfabc2cb-7622-45ac-b566-baf9171817d8,Namespace:calico-system,Attempt:0,}" Oct 31 05:28:37.079813 systemd-networkd[1579]: cali839efc101ff: Link UP Oct 31 05:28:37.081126 systemd-networkd[1579]: cali839efc101ff: Gained carrier Oct 31 05:28:37.097497 containerd[1681]: 2025-10-31 05:28:36.943 [INFO][4593] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76bf9d45db--nvffh-eth0 calico-apiserver-76bf9d45db- calico-apiserver 186659ac-48c5-4a61-b81f-4021ad112f63 848 0 2025-10-31 05:28:05 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76bf9d45db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76bf9d45db-nvffh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali839efc101ff [] [] }} ContainerID="acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1" Namespace="calico-apiserver" Pod="calico-apiserver-76bf9d45db-nvffh" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bf9d45db--nvffh-" Oct 31 05:28:37.097497 containerd[1681]: 2025-10-31 05:28:36.944 [INFO][4593] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1" Namespace="calico-apiserver" Pod="calico-apiserver-76bf9d45db-nvffh" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bf9d45db--nvffh-eth0" Oct 31 05:28:37.097497 containerd[1681]: 2025-10-31 05:28:37.014 [INFO][4634] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1" HandleID="k8s-pod-network.acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1" Workload="localhost-k8s-calico--apiserver--76bf9d45db--nvffh-eth0" Oct 31 05:28:37.097497 containerd[1681]: 2025-10-31 05:28:37.015 [INFO][4634] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1" HandleID="k8s-pod-network.acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1" Workload="localhost-k8s-calico--apiserver--76bf9d45db--nvffh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76bf9d45db-nvffh", "timestamp":"2025-10-31 05:28:37.014312366 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 05:28:37.097497 containerd[1681]: 2025-10-31 05:28:37.015 [INFO][4634] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:28:37.097497 containerd[1681]: 2025-10-31 05:28:37.015 [INFO][4634] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
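Annotation: the ErrImagePull / ImagePullBackOff entries above ("fetch failed after status: 404" for ghcr.io/flatcar/calico/apiserver:v3.30.4) can be reproduced outside containerd by probing the registry's manifest endpoint directly. The /v2/<name>/manifests/<tag> route is the standard OCI distribution API; the anonymous-token endpoint and its response shape are assumptions about ghcr.io, so treat this as a diagnostic sketch rather than a verified recipe.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/apiserver", "v3.30.4"

	// Fetch an anonymous pull token (assumed ghcr.io token flow).
	tokURL := fmt.Sprintf("https://ghcr.io/token?scope=repository:%s:pull", repo)
	resp, err := http.Get(tokURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// HEAD the manifest: 200 means the tag exists; 404 matches the log above.
	req, _ := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()
	fmt.Println(repo+":"+tag, "->", res.Status)
}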
Oct 31 05:28:37.097497 containerd[1681]: 2025-10-31 05:28:37.015 [INFO][4634] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 05:28:37.097497 containerd[1681]: 2025-10-31 05:28:37.031 [INFO][4634] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1" host="localhost" Oct 31 05:28:37.097497 containerd[1681]: 2025-10-31 05:28:37.046 [INFO][4634] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 05:28:37.097497 containerd[1681]: 2025-10-31 05:28:37.050 [INFO][4634] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 05:28:37.097497 containerd[1681]: 2025-10-31 05:28:37.052 [INFO][4634] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 05:28:37.097497 containerd[1681]: 2025-10-31 05:28:37.055 [INFO][4634] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 05:28:37.097497 containerd[1681]: 2025-10-31 05:28:37.055 [INFO][4634] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1" host="localhost" Oct 31 05:28:37.097497 containerd[1681]: 2025-10-31 05:28:37.057 [INFO][4634] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1 Oct 31 05:28:37.097497 containerd[1681]: 2025-10-31 05:28:37.063 [INFO][4634] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1" host="localhost" Oct 31 05:28:37.097497 containerd[1681]: 2025-10-31 05:28:37.072 [INFO][4634] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1" host="localhost" Oct 31 05:28:37.097497 containerd[1681]: 2025-10-31 05:28:37.072 [INFO][4634] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1" host="localhost" Oct 31 05:28:37.097497 containerd[1681]: 2025-10-31 05:28:37.072 [INFO][4634] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
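Annotation: each IPAM pass above resolves the same block affinity (192.168.88.128/26) and then claims one address at a time, which is why successive endpoints land on .130, .131, .132 and so on. A tiny standard-library sketch of the containment arithmetic; it is illustrative only and not Calico's allocator, which additionally tracks handles, reservations, and per-host affinities.

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The block every IPAM entry above confirms affinity for.
	block := netip.MustParsePrefix("192.168.88.128/26")

	// A /26 holds 2^(32-26) = 64 addresses, 192.168.88.128 through 192.168.88.191,
	// so every per-pod /32 handed out on this node falls inside it.
	fmt.Println("addresses in block:", 1<<(32-block.Bits()))

	for _, ip := range []string{"192.168.88.130", "192.168.88.131", "192.168.88.132", "192.168.89.1"} {
		a := netip.MustParseAddr(ip)
		fmt.Println(ip, "in", block, "->", block.Contains(a))
	}
}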
Oct 31 05:28:37.097497 containerd[1681]: 2025-10-31 05:28:37.072 [INFO][4634] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1" HandleID="k8s-pod-network.acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1" Workload="localhost-k8s-calico--apiserver--76bf9d45db--nvffh-eth0" Oct 31 05:28:37.106296 containerd[1681]: 2025-10-31 05:28:37.074 [INFO][4593] cni-plugin/k8s.go 418: Populated endpoint ContainerID="acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1" Namespace="calico-apiserver" Pod="calico-apiserver-76bf9d45db-nvffh" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bf9d45db--nvffh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76bf9d45db--nvffh-eth0", GenerateName:"calico-apiserver-76bf9d45db-", Namespace:"calico-apiserver", SelfLink:"", UID:"186659ac-48c5-4a61-b81f-4021ad112f63", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 28, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bf9d45db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76bf9d45db-nvffh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali839efc101ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:28:37.106296 containerd[1681]: 2025-10-31 05:28:37.075 [INFO][4593] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1" Namespace="calico-apiserver" Pod="calico-apiserver-76bf9d45db-nvffh" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bf9d45db--nvffh-eth0" Oct 31 05:28:37.106296 containerd[1681]: 2025-10-31 05:28:37.075 [INFO][4593] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali839efc101ff ContainerID="acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1" Namespace="calico-apiserver" Pod="calico-apiserver-76bf9d45db-nvffh" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bf9d45db--nvffh-eth0" Oct 31 05:28:37.106296 containerd[1681]: 2025-10-31 05:28:37.081 [INFO][4593] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1" Namespace="calico-apiserver" Pod="calico-apiserver-76bf9d45db-nvffh" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bf9d45db--nvffh-eth0" Oct 31 05:28:37.106296 containerd[1681]: 2025-10-31 05:28:37.081 [INFO][4593] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1" Namespace="calico-apiserver" Pod="calico-apiserver-76bf9d45db-nvffh" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bf9d45db--nvffh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76bf9d45db--nvffh-eth0", GenerateName:"calico-apiserver-76bf9d45db-", Namespace:"calico-apiserver", SelfLink:"", UID:"186659ac-48c5-4a61-b81f-4021ad112f63", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 28, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bf9d45db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1", Pod:"calico-apiserver-76bf9d45db-nvffh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali839efc101ff", MAC:"4e:f5:c0:1b:24:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:28:37.106296 containerd[1681]: 2025-10-31 05:28:37.096 [INFO][4593] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1" Namespace="calico-apiserver" Pod="calico-apiserver-76bf9d45db-nvffh" WorkloadEndpoint="localhost-k8s-calico--apiserver--76bf9d45db--nvffh-eth0" Oct 31 05:28:37.169659 systemd-networkd[1579]: cali3a44bd48c49: Link UP Oct 31 05:28:37.170147 systemd-networkd[1579]: cali3a44bd48c49: Gained carrier Oct 31 05:28:37.182781 containerd[1681]: 2025-10-31 05:28:36.978 [INFO][4583] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7d5c795875--vrwbt-eth0 calico-kube-controllers-7d5c795875- calico-system ca64dee5-fbb5-4c1e-b441-a24d0937f7dd 853 0 2025-10-31 05:28:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d5c795875 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7d5c795875-vrwbt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3a44bd48c49 [] [] }} ContainerID="c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063" Namespace="calico-system" Pod="calico-kube-controllers-7d5c795875-vrwbt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d5c795875--vrwbt-" Oct 31 05:28:37.182781 containerd[1681]: 2025-10-31 05:28:36.978 [INFO][4583] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063" Namespace="calico-system" Pod="calico-kube-controllers-7d5c795875-vrwbt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d5c795875--vrwbt-eth0" Oct 31 05:28:37.182781 containerd[1681]: 2025-10-31 05:28:37.023 [INFO][4649] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063" HandleID="k8s-pod-network.c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063" Workload="localhost-k8s-calico--kube--controllers--7d5c795875--vrwbt-eth0" Oct 31 05:28:37.182781 containerd[1681]: 2025-10-31 05:28:37.023 [INFO][4649] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063" HandleID="k8s-pod-network.c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063" Workload="localhost-k8s-calico--kube--controllers--7d5c795875--vrwbt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f1010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7d5c795875-vrwbt", "timestamp":"2025-10-31 05:28:37.022990329 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 05:28:37.182781 containerd[1681]: 2025-10-31 05:28:37.023 [INFO][4649] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:28:37.182781 containerd[1681]: 2025-10-31 05:28:37.072 [INFO][4649] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:28:37.182781 containerd[1681]: 2025-10-31 05:28:37.072 [INFO][4649] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 05:28:37.182781 containerd[1681]: 2025-10-31 05:28:37.128 [INFO][4649] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063" host="localhost" Oct 31 05:28:37.182781 containerd[1681]: 2025-10-31 05:28:37.145 [INFO][4649] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 05:28:37.182781 containerd[1681]: 2025-10-31 05:28:37.149 [INFO][4649] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 05:28:37.182781 containerd[1681]: 2025-10-31 05:28:37.150 [INFO][4649] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 05:28:37.182781 containerd[1681]: 2025-10-31 05:28:37.151 [INFO][4649] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 05:28:37.182781 containerd[1681]: 2025-10-31 05:28:37.151 [INFO][4649] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063" host="localhost" Oct 31 05:28:37.182781 containerd[1681]: 2025-10-31 05:28:37.152 [INFO][4649] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063 Oct 31 05:28:37.182781 containerd[1681]: 2025-10-31 05:28:37.155 [INFO][4649] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063" host="localhost" Oct 31 05:28:37.182781 
containerd[1681]: 2025-10-31 05:28:37.162 [INFO][4649] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063" host="localhost" Oct 31 05:28:37.182781 containerd[1681]: 2025-10-31 05:28:37.162 [INFO][4649] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063" host="localhost" Oct 31 05:28:37.182781 containerd[1681]: 2025-10-31 05:28:37.162 [INFO][4649] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:28:37.182781 containerd[1681]: 2025-10-31 05:28:37.162 [INFO][4649] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063" HandleID="k8s-pod-network.c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063" Workload="localhost-k8s-calico--kube--controllers--7d5c795875--vrwbt-eth0" Oct 31 05:28:37.192837 containerd[1681]: 2025-10-31 05:28:37.164 [INFO][4583] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063" Namespace="calico-system" Pod="calico-kube-controllers-7d5c795875-vrwbt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d5c795875--vrwbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d5c795875--vrwbt-eth0", GenerateName:"calico-kube-controllers-7d5c795875-", Namespace:"calico-system", SelfLink:"", UID:"ca64dee5-fbb5-4c1e-b441-a24d0937f7dd", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 28, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d5c795875", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7d5c795875-vrwbt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3a44bd48c49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:28:37.192837 containerd[1681]: 2025-10-31 05:28:37.166 [INFO][4583] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063" Namespace="calico-system" Pod="calico-kube-controllers-7d5c795875-vrwbt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d5c795875--vrwbt-eth0" Oct 31 05:28:37.192837 containerd[1681]: 2025-10-31 05:28:37.166 [INFO][4583] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a44bd48c49 ContainerID="c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063" 
Namespace="calico-system" Pod="calico-kube-controllers-7d5c795875-vrwbt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d5c795875--vrwbt-eth0" Oct 31 05:28:37.192837 containerd[1681]: 2025-10-31 05:28:37.170 [INFO][4583] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063" Namespace="calico-system" Pod="calico-kube-controllers-7d5c795875-vrwbt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d5c795875--vrwbt-eth0" Oct 31 05:28:37.192837 containerd[1681]: 2025-10-31 05:28:37.170 [INFO][4583] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063" Namespace="calico-system" Pod="calico-kube-controllers-7d5c795875-vrwbt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d5c795875--vrwbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d5c795875--vrwbt-eth0", GenerateName:"calico-kube-controllers-7d5c795875-", Namespace:"calico-system", SelfLink:"", UID:"ca64dee5-fbb5-4c1e-b441-a24d0937f7dd", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 28, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d5c795875", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063", Pod:"calico-kube-controllers-7d5c795875-vrwbt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3a44bd48c49", MAC:"ba:8e:5e:8d:82:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:28:37.192837 containerd[1681]: 2025-10-31 05:28:37.180 [INFO][4583] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063" Namespace="calico-system" Pod="calico-kube-controllers-7d5c795875-vrwbt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d5c795875--vrwbt-eth0" Oct 31 05:28:37.201200 kubelet[2992]: E1031 05:28:37.201175 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76bf9d45db-sqc28" podUID="886a9a56-e128-4afe-8400-8c8f904fe953" Oct 31 
05:28:37.258560 containerd[1681]: time="2025-10-31T05:28:37.258219787Z" level=info msg="connecting to shim acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1" address="unix:///run/containerd/s/13f1eaf10eab392a746739d1c66928d692d031a34f1fd083776085291b10ce2a" namespace=k8s.io protocol=ttrpc version=3 Oct 31 05:28:37.265308 containerd[1681]: time="2025-10-31T05:28:37.265105531Z" level=info msg="connecting to shim c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063" address="unix:///run/containerd/s/ad49b51f33ca2d1ae3ceb47d35cf044d11931596e6f6cc1d6d08a7b55a11bb39" namespace=k8s.io protocol=ttrpc version=3 Oct 31 05:28:37.282830 systemd-networkd[1579]: calibd0dc125a75: Link UP Oct 31 05:28:37.286409 systemd-networkd[1579]: calibd0dc125a75: Gained carrier Oct 31 05:28:37.302383 systemd[1]: Started cri-containerd-acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1.scope - libcontainer container acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1. Oct 31 05:28:37.305856 systemd[1]: Started cri-containerd-c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063.scope - libcontainer container c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063. Oct 31 05:28:37.308864 containerd[1681]: 2025-10-31 05:28:37.000 [INFO][4594] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--d5mr7-eth0 csi-node-driver- calico-system dfabc2cb-7622-45ac-b566-baf9171817d8 734 0 2025-10-31 05:28:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-d5mr7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibd0dc125a75 [] [] }} ContainerID="ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb" Namespace="calico-system" Pod="csi-node-driver-d5mr7" WorkloadEndpoint="localhost-k8s-csi--node--driver--d5mr7-" Oct 31 05:28:37.308864 containerd[1681]: 2025-10-31 05:28:37.001 [INFO][4594] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb" Namespace="calico-system" Pod="csi-node-driver-d5mr7" WorkloadEndpoint="localhost-k8s-csi--node--driver--d5mr7-eth0" Oct 31 05:28:37.308864 containerd[1681]: 2025-10-31 05:28:37.058 [INFO][4659] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb" HandleID="k8s-pod-network.ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb" Workload="localhost-k8s-csi--node--driver--d5mr7-eth0" Oct 31 05:28:37.308864 containerd[1681]: 2025-10-31 05:28:37.058 [INFO][4659] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb" HandleID="k8s-pod-network.ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb" Workload="localhost-k8s-csi--node--driver--d5mr7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d400), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-d5mr7", "timestamp":"2025-10-31 05:28:37.058554022 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 05:28:37.308864 containerd[1681]: 2025-10-31 05:28:37.058 [INFO][4659] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:28:37.308864 containerd[1681]: 2025-10-31 05:28:37.162 [INFO][4659] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:28:37.308864 containerd[1681]: 2025-10-31 05:28:37.162 [INFO][4659] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 05:28:37.308864 containerd[1681]: 2025-10-31 05:28:37.229 [INFO][4659] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb" host="localhost" Oct 31 05:28:37.308864 containerd[1681]: 2025-10-31 05:28:37.248 [INFO][4659] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 05:28:37.308864 containerd[1681]: 2025-10-31 05:28:37.252 [INFO][4659] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 05:28:37.308864 containerd[1681]: 2025-10-31 05:28:37.254 [INFO][4659] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 05:28:37.308864 containerd[1681]: 2025-10-31 05:28:37.257 [INFO][4659] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 05:28:37.308864 containerd[1681]: 2025-10-31 05:28:37.257 [INFO][4659] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb" host="localhost" Oct 31 05:28:37.308864 containerd[1681]: 2025-10-31 05:28:37.259 [INFO][4659] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb Oct 31 05:28:37.308864 containerd[1681]: 2025-10-31 05:28:37.266 [INFO][4659] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb" host="localhost" Oct 31 05:28:37.308864 containerd[1681]: 2025-10-31 05:28:37.275 [INFO][4659] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb" host="localhost" Oct 31 05:28:37.308864 containerd[1681]: 2025-10-31 05:28:37.275 [INFO][4659] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb" host="localhost" Oct 31 05:28:37.308864 containerd[1681]: 2025-10-31 05:28:37.275 [INFO][4659] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 05:28:37.308864 containerd[1681]: 2025-10-31 05:28:37.275 [INFO][4659] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb" HandleID="k8s-pod-network.ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb" Workload="localhost-k8s-csi--node--driver--d5mr7-eth0" Oct 31 05:28:37.310750 containerd[1681]: 2025-10-31 05:28:37.278 [INFO][4594] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb" Namespace="calico-system" Pod="csi-node-driver-d5mr7" WorkloadEndpoint="localhost-k8s-csi--node--driver--d5mr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--d5mr7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dfabc2cb-7622-45ac-b566-baf9171817d8", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 28, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-d5mr7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd0dc125a75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:28:37.310750 containerd[1681]: 2025-10-31 05:28:37.278 [INFO][4594] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb" Namespace="calico-system" Pod="csi-node-driver-d5mr7" WorkloadEndpoint="localhost-k8s-csi--node--driver--d5mr7-eth0" Oct 31 05:28:37.310750 containerd[1681]: 2025-10-31 05:28:37.278 [INFO][4594] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd0dc125a75 ContainerID="ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb" Namespace="calico-system" Pod="csi-node-driver-d5mr7" WorkloadEndpoint="localhost-k8s-csi--node--driver--d5mr7-eth0" Oct 31 05:28:37.310750 containerd[1681]: 2025-10-31 05:28:37.290 [INFO][4594] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb" Namespace="calico-system" Pod="csi-node-driver-d5mr7" WorkloadEndpoint="localhost-k8s-csi--node--driver--d5mr7-eth0" Oct 31 05:28:37.310750 containerd[1681]: 2025-10-31 05:28:37.291 [INFO][4594] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb" Namespace="calico-system" Pod="csi-node-driver-d5mr7" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--d5mr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--d5mr7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dfabc2cb-7622-45ac-b566-baf9171817d8", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 28, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb", Pod:"csi-node-driver-d5mr7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd0dc125a75", MAC:"fe:3f:fa:99:8f:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:28:37.310750 containerd[1681]: 2025-10-31 05:28:37.302 [INFO][4594] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb" Namespace="calico-system" Pod="csi-node-driver-d5mr7" WorkloadEndpoint="localhost-k8s-csi--node--driver--d5mr7-eth0" Oct 31 05:28:37.323720 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 05:28:37.336073 containerd[1681]: time="2025-10-31T05:28:37.336040292Z" level=info msg="connecting to shim ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb" address="unix:///run/containerd/s/c7ce4cb0c683c0d0628dc904908f4a500868c490a8a0b7d83ef8ebcb4251f8c1" namespace=k8s.io protocol=ttrpc version=3 Oct 31 05:28:37.344273 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 05:28:37.379098 systemd[1]: Started cri-containerd-ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb.scope - libcontainer container ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb. 
Oct 31 05:28:37.396647 systemd-networkd[1579]: cali62a53fcf259: Link UP Oct 31 05:28:37.406097 systemd-networkd[1579]: cali62a53fcf259: Gained carrier Oct 31 05:28:37.425151 containerd[1681]: 2025-10-31 05:28:36.949 [INFO][4574] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--k2xp4-eth0 goldmane-666569f655- calico-system 5205e036-4995-4277-8348-1e3bf6f34336 850 0 2025-10-31 05:28:07 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-k2xp4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali62a53fcf259 [] [] }} ContainerID="d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5" Namespace="calico-system" Pod="goldmane-666569f655-k2xp4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k2xp4-" Oct 31 05:28:37.425151 containerd[1681]: 2025-10-31 05:28:36.949 [INFO][4574] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5" Namespace="calico-system" Pod="goldmane-666569f655-k2xp4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k2xp4-eth0" Oct 31 05:28:37.425151 containerd[1681]: 2025-10-31 05:28:37.069 [INFO][4637] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5" HandleID="k8s-pod-network.d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5" Workload="localhost-k8s-goldmane--666569f655--k2xp4-eth0" Oct 31 05:28:37.425151 containerd[1681]: 2025-10-31 05:28:37.072 [INFO][4637] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5" HandleID="k8s-pod-network.d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5" Workload="localhost-k8s-goldmane--666569f655--k2xp4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5350), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-k2xp4", "timestamp":"2025-10-31 05:28:37.069030136 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 05:28:37.425151 containerd[1681]: 2025-10-31 05:28:37.072 [INFO][4637] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:28:37.425151 containerd[1681]: 2025-10-31 05:28:37.275 [INFO][4637] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 05:28:37.425151 containerd[1681]: 2025-10-31 05:28:37.275 [INFO][4637] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 05:28:37.425151 containerd[1681]: 2025-10-31 05:28:37.330 [INFO][4637] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5" host="localhost" Oct 31 05:28:37.425151 containerd[1681]: 2025-10-31 05:28:37.346 [INFO][4637] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 05:28:37.425151 containerd[1681]: 2025-10-31 05:28:37.354 [INFO][4637] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 05:28:37.425151 containerd[1681]: 2025-10-31 05:28:37.357 [INFO][4637] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 05:28:37.425151 containerd[1681]: 2025-10-31 05:28:37.359 [INFO][4637] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 05:28:37.425151 containerd[1681]: 2025-10-31 05:28:37.360 [INFO][4637] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5" host="localhost" Oct 31 05:28:37.425151 containerd[1681]: 2025-10-31 05:28:37.362 [INFO][4637] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5 Oct 31 05:28:37.425151 containerd[1681]: 2025-10-31 05:28:37.369 [INFO][4637] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5" host="localhost" Oct 31 05:28:37.425151 containerd[1681]: 2025-10-31 05:28:37.375 [INFO][4637] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5" host="localhost" Oct 31 05:28:37.425151 containerd[1681]: 2025-10-31 05:28:37.375 [INFO][4637] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5" host="localhost" Oct 31 05:28:37.425151 containerd[1681]: 2025-10-31 05:28:37.375 [INFO][4637] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
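The ipam/ipam.go entries above trace Calico's block-affinity flow: confirm the node's affinity for 192.168.88.128/26, load that block, then claim the next free address from it, which is why consecutive pods on this node land on .134, .135 and .136. A minimal sketch of the block arithmetic (illustration only, not taken from the log), using Go's net/netip from the standard library:

// Minimal sketch: the IPAM block claimed in the entries above.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block affinity held by node "localhost" per the ipam/ipam.go entries above.
	block := netip.MustParsePrefix("192.168.88.128/26")

	first := block.Addr() // 192.168.88.128
	// A /26 holds 2^(32-26) = 64 addresses; advance 63 times to reach the last one.
	last := first
	for i := 0; i < 63; i++ {
		last = last.Next()
	}
	fmt.Printf("block %s covers %s - %s (64 addresses)\n", block, first, last)
}

Running it prints the 64-address range 192.168.88.128 - 192.168.88.191, which is the range that all of the per-pod /32 assignments in this section fall into.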
Oct 31 05:28:37.425151 containerd[1681]: 2025-10-31 05:28:37.375 [INFO][4637] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5" HandleID="k8s-pod-network.d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5" Workload="localhost-k8s-goldmane--666569f655--k2xp4-eth0" Oct 31 05:28:37.426388 containerd[1681]: 2025-10-31 05:28:37.377 [INFO][4574] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5" Namespace="calico-system" Pod="goldmane-666569f655-k2xp4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k2xp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--k2xp4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5205e036-4995-4277-8348-1e3bf6f34336", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 28, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-k2xp4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali62a53fcf259", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:28:37.426388 containerd[1681]: 2025-10-31 05:28:37.378 [INFO][4574] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5" Namespace="calico-system" Pod="goldmane-666569f655-k2xp4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k2xp4-eth0" Oct 31 05:28:37.426388 containerd[1681]: 2025-10-31 05:28:37.378 [INFO][4574] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62a53fcf259 ContainerID="d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5" Namespace="calico-system" Pod="goldmane-666569f655-k2xp4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k2xp4-eth0" Oct 31 05:28:37.426388 containerd[1681]: 2025-10-31 05:28:37.407 [INFO][4574] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5" Namespace="calico-system" Pod="goldmane-666569f655-k2xp4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k2xp4-eth0" Oct 31 05:28:37.426388 containerd[1681]: 2025-10-31 05:28:37.409 [INFO][4574] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5" Namespace="calico-system" Pod="goldmane-666569f655-k2xp4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k2xp4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--k2xp4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5205e036-4995-4277-8348-1e3bf6f34336", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 28, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5", Pod:"goldmane-666569f655-k2xp4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali62a53fcf259", MAC:"9e:0f:ff:22:c8:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:28:37.426388 containerd[1681]: 2025-10-31 05:28:37.418 [INFO][4574] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5" Namespace="calico-system" Pod="goldmane-666569f655-k2xp4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k2xp4-eth0" Oct 31 05:28:37.439422 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 05:28:37.448992 containerd[1681]: time="2025-10-31T05:28:37.448571111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bf9d45db-nvffh,Uid:186659ac-48c5-4a61-b81f-4021ad112f63,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"acc23cb066a72ba78b772f83bb6f9549275708777a9d791686590abe6fd745c1\"" Oct 31 05:28:37.452284 containerd[1681]: time="2025-10-31T05:28:37.452175044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 05:28:37.466209 containerd[1681]: time="2025-10-31T05:28:37.466186251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d5c795875-vrwbt,Uid:ca64dee5-fbb5-4c1e-b441-a24d0937f7dd,Namespace:calico-system,Attempt:0,} returns sandbox id \"c444535e011db1bd886ad3c9ad71c2c1bd67a72a793bb9bba267729f437f7063\"" Oct 31 05:28:37.470115 containerd[1681]: time="2025-10-31T05:28:37.470073454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d5mr7,Uid:dfabc2cb-7622-45ac-b566-baf9171817d8,Namespace:calico-system,Attempt:0,} returns sandbox id \"ae2a91d4d7473ab4fca36d255d4c24316be2f306ccdb41f09823d156c56720fb\"" Oct 31 05:28:37.490115 systemd-networkd[1579]: cali19d456e3df4: Link UP Oct 31 05:28:37.491005 systemd-networkd[1579]: cali19d456e3df4: Gained carrier Oct 31 05:28:37.493075 systemd-networkd[1579]: calib6db13fbc62: Gained IPv6LL Oct 31 05:28:37.496177 containerd[1681]: time="2025-10-31T05:28:37.496150877Z" level=info msg="connecting to shim d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5" 
address="unix:///run/containerd/s/2a8d7e888810a6c6996c8ca03627e2dcab658ef8110ee07a37d66c8a26598739" namespace=k8s.io protocol=ttrpc version=3 Oct 31 05:28:37.505184 containerd[1681]: 2025-10-31 05:28:37.011 [INFO][4603] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--tw9d4-eth0 coredns-674b8bbfcf- kube-system f84825b5-aa90-4de7-a17e-50cbbffb855d 852 0 2025-10-31 05:27:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-tw9d4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali19d456e3df4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c" Namespace="kube-system" Pod="coredns-674b8bbfcf-tw9d4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tw9d4-" Oct 31 05:28:37.505184 containerd[1681]: 2025-10-31 05:28:37.011 [INFO][4603] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c" Namespace="kube-system" Pod="coredns-674b8bbfcf-tw9d4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tw9d4-eth0" Oct 31 05:28:37.505184 containerd[1681]: 2025-10-31 05:28:37.078 [INFO][4662] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c" HandleID="k8s-pod-network.f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c" Workload="localhost-k8s-coredns--674b8bbfcf--tw9d4-eth0" Oct 31 05:28:37.505184 containerd[1681]: 2025-10-31 05:28:37.079 [INFO][4662] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c" HandleID="k8s-pod-network.f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c" Workload="localhost-k8s-coredns--674b8bbfcf--tw9d4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb600), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-tw9d4", "timestamp":"2025-10-31 05:28:37.078048289 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 05:28:37.505184 containerd[1681]: 2025-10-31 05:28:37.079 [INFO][4662] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:28:37.505184 containerd[1681]: 2025-10-31 05:28:37.375 [INFO][4662] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 05:28:37.505184 containerd[1681]: 2025-10-31 05:28:37.375 [INFO][4662] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 05:28:37.505184 containerd[1681]: 2025-10-31 05:28:37.432 [INFO][4662] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c" host="localhost" Oct 31 05:28:37.505184 containerd[1681]: 2025-10-31 05:28:37.448 [INFO][4662] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 05:28:37.505184 containerd[1681]: 2025-10-31 05:28:37.456 [INFO][4662] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 05:28:37.505184 containerd[1681]: 2025-10-31 05:28:37.457 [INFO][4662] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 05:28:37.505184 containerd[1681]: 2025-10-31 05:28:37.459 [INFO][4662] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 05:28:37.505184 containerd[1681]: 2025-10-31 05:28:37.459 [INFO][4662] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c" host="localhost" Oct 31 05:28:37.505184 containerd[1681]: 2025-10-31 05:28:37.461 [INFO][4662] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c Oct 31 05:28:37.505184 containerd[1681]: 2025-10-31 05:28:37.470 [INFO][4662] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c" host="localhost" Oct 31 05:28:37.505184 containerd[1681]: 2025-10-31 05:28:37.481 [INFO][4662] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c" host="localhost" Oct 31 05:28:37.505184 containerd[1681]: 2025-10-31 05:28:37.481 [INFO][4662] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c" host="localhost" Oct 31 05:28:37.505184 containerd[1681]: 2025-10-31 05:28:37.481 [INFO][4662] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 05:28:37.505184 containerd[1681]: 2025-10-31 05:28:37.481 [INFO][4662] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c" HandleID="k8s-pod-network.f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c" Workload="localhost-k8s-coredns--674b8bbfcf--tw9d4-eth0" Oct 31 05:28:37.505589 containerd[1681]: 2025-10-31 05:28:37.483 [INFO][4603] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c" Namespace="kube-system" Pod="coredns-674b8bbfcf-tw9d4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tw9d4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--tw9d4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f84825b5-aa90-4de7-a17e-50cbbffb855d", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-tw9d4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali19d456e3df4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:28:37.505589 containerd[1681]: 2025-10-31 05:28:37.483 [INFO][4603] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c" Namespace="kube-system" Pod="coredns-674b8bbfcf-tw9d4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tw9d4-eth0" Oct 31 05:28:37.505589 containerd[1681]: 2025-10-31 05:28:37.483 [INFO][4603] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19d456e3df4 ContainerID="f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c" Namespace="kube-system" Pod="coredns-674b8bbfcf-tw9d4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tw9d4-eth0" Oct 31 05:28:37.505589 containerd[1681]: 2025-10-31 05:28:37.491 [INFO][4603] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c" Namespace="kube-system" Pod="coredns-674b8bbfcf-tw9d4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tw9d4-eth0" Oct 31 05:28:37.505589 
containerd[1681]: 2025-10-31 05:28:37.491 [INFO][4603] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c" Namespace="kube-system" Pod="coredns-674b8bbfcf-tw9d4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tw9d4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--tw9d4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f84825b5-aa90-4de7-a17e-50cbbffb855d", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c", Pod:"coredns-674b8bbfcf-tw9d4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali19d456e3df4", MAC:"2e:a2:54:b7:ce:50", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:28:37.505589 containerd[1681]: 2025-10-31 05:28:37.501 [INFO][4603] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c" Namespace="kube-system" Pod="coredns-674b8bbfcf-tw9d4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tw9d4-eth0" Oct 31 05:28:37.521008 systemd[1]: Started cri-containerd-d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5.scope - libcontainer container d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5. Oct 31 05:28:37.530206 containerd[1681]: time="2025-10-31T05:28:37.530170128Z" level=info msg="connecting to shim f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c" address="unix:///run/containerd/s/b95678d3baee505e204bb95dce3ceacc357f3ebc7d612f6580321d6855b8fce4" namespace=k8s.io protocol=ttrpc version=3 Oct 31 05:28:37.534452 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 05:28:37.555051 systemd[1]: Started cri-containerd-f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c.scope - libcontainer container f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c. 
Oct 31 05:28:37.568998 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 05:28:37.584195 containerd[1681]: time="2025-10-31T05:28:37.584163553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k2xp4,Uid:5205e036-4995-4277-8348-1e3bf6f34336,Namespace:calico-system,Attempt:0,} returns sandbox id \"d3f8e8ae0c72d3d8463d3e71ed46968040ce56d5a639d93bf505d86b613e26a5\"" Oct 31 05:28:37.601087 containerd[1681]: time="2025-10-31T05:28:37.601059185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tw9d4,Uid:f84825b5-aa90-4de7-a17e-50cbbffb855d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c\"" Oct 31 05:28:37.607069 containerd[1681]: time="2025-10-31T05:28:37.606796136Z" level=info msg="CreateContainer within sandbox \"f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 05:28:37.611149 containerd[1681]: time="2025-10-31T05:28:37.611130973Z" level=info msg="Container f056138473a412761beb51fa80083a4b342b3d0c9ebdaff4dc208983fff2ad70: CDI devices from CRI Config.CDIDevices: []" Oct 31 05:28:37.614661 containerd[1681]: time="2025-10-31T05:28:37.614642342Z" level=info msg="CreateContainer within sandbox \"f6455490050f5d3c169a0a517b30c3539eb20c1f7359bbe2a1f3db946ff6992c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f056138473a412761beb51fa80083a4b342b3d0c9ebdaff4dc208983fff2ad70\"" Oct 31 05:28:37.616119 containerd[1681]: time="2025-10-31T05:28:37.616105005Z" level=info msg="StartContainer for \"f056138473a412761beb51fa80083a4b342b3d0c9ebdaff4dc208983fff2ad70\"" Oct 31 05:28:37.616708 containerd[1681]: time="2025-10-31T05:28:37.616694687Z" level=info msg="connecting to shim f056138473a412761beb51fa80083a4b342b3d0c9ebdaff4dc208983fff2ad70" address="unix:///run/containerd/s/b95678d3baee505e204bb95dce3ceacc357f3ebc7d612f6580321d6855b8fce4" protocol=ttrpc version=3 Oct 31 05:28:37.643069 systemd[1]: Started cri-containerd-f056138473a412761beb51fa80083a4b342b3d0c9ebdaff4dc208983fff2ad70.scope - libcontainer container f056138473a412761beb51fa80083a4b342b3d0c9ebdaff4dc208983fff2ad70. 
Oct 31 05:28:37.666441 containerd[1681]: time="2025-10-31T05:28:37.666405704Z" level=info msg="StartContainer for \"f056138473a412761beb51fa80083a4b342b3d0c9ebdaff4dc208983fff2ad70\" returns successfully" Oct 31 05:28:37.824231 containerd[1681]: time="2025-10-31T05:28:37.824192639Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:28:37.824658 containerd[1681]: time="2025-10-31T05:28:37.824637285Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 05:28:37.824736 containerd[1681]: time="2025-10-31T05:28:37.824681799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 05:28:37.824882 kubelet[2992]: E1031 05:28:37.824858 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:28:37.825355 kubelet[2992]: E1031 05:28:37.825140 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:28:37.825355 kubelet[2992]: E1031 05:28:37.825320 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6gs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76bf9d45db-nvffh_calico-apiserver(186659ac-48c5-4a61-b81f-4021ad112f63): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 05:28:37.825729 containerd[1681]: time="2025-10-31T05:28:37.825552189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 05:28:37.826983 kubelet[2992]: E1031 05:28:37.826964 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76bf9d45db-nvffh" podUID="186659ac-48c5-4a61-b81f-4021ad112f63" Oct 31 05:28:38.148720 containerd[1681]: time="2025-10-31T05:28:38.148685332Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:28:38.152660 containerd[1681]: time="2025-10-31T05:28:38.152635733Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 05:28:38.152723 containerd[1681]: time="2025-10-31T05:28:38.152689788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 05:28:38.152802 kubelet[2992]: E1031 05:28:38.152775 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 05:28:38.152858 kubelet[2992]: E1031 05:28:38.152808 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 05:28:38.153279 containerd[1681]: time="2025-10-31T05:28:38.153071889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 05:28:38.153319 kubelet[2992]: E1031 05:28:38.153272 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zlb7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d5c795875-vrwbt_calico-system(ca64dee5-fbb5-4c1e-b441-a24d0937f7dd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 05:28:38.154477 kubelet[2992]: E1031 05:28:38.154424 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c795875-vrwbt" podUID="ca64dee5-fbb5-4c1e-b441-a24d0937f7dd" Oct 31 05:28:38.233478 kubelet[2992]: E1031 05:28:38.233420 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76bf9d45db-nvffh" podUID="186659ac-48c5-4a61-b81f-4021ad112f63" Oct 31 05:28:38.237441 kubelet[2992]: I1031 05:28:38.237396 2992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tw9d4" podStartSLOduration=41.237382289 podStartE2EDuration="41.237382289s" podCreationTimestamp="2025-10-31 05:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 05:28:38.234210411 +0000 UTC m=+46.744080846" watchObservedRunningTime="2025-10-31 05:28:38.237382289 +0000 UTC m=+46.747252725" Oct 31 05:28:38.237622 kubelet[2992]: E1031 05:28:38.237602 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c795875-vrwbt" podUID="ca64dee5-fbb5-4c1e-b441-a24d0937f7dd" Oct 31 05:28:38.325129 systemd-networkd[1579]: cali3a44bd48c49: Gained IPv6LL Oct 31 05:28:38.453039 systemd-networkd[1579]: cali839efc101ff: Gained IPv6LL Oct 31 05:28:38.487846 containerd[1681]: time="2025-10-31T05:28:38.487818415Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:28:38.491450 containerd[1681]: time="2025-10-31T05:28:38.491415066Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 05:28:38.491549 containerd[1681]: time="2025-10-31T05:28:38.491487698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 05:28:38.491699 kubelet[2992]: E1031 05:28:38.491673 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 05:28:38.491739 kubelet[2992]: E1031 05:28:38.491707 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 05:28:38.491939 containerd[1681]: time="2025-10-31T05:28:38.491873083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 05:28:38.510628 kubelet[2992]: E1031 05:28:38.510549 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6k4sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-d5mr7_calico-system(dfabc2cb-7622-45ac-b566-baf9171817d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 05:28:38.837160 systemd-networkd[1579]: cali19d456e3df4: Gained IPv6LL Oct 31 05:28:38.845795 containerd[1681]: time="2025-10-31T05:28:38.845722625Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:28:38.849212 containerd[1681]: time="2025-10-31T05:28:38.849161296Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 05:28:38.849383 containerd[1681]: time="2025-10-31T05:28:38.849165471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 05:28:38.849514 kubelet[2992]: E1031 05:28:38.849476 2992 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 05:28:38.849763 kubelet[2992]: E1031 05:28:38.849531 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 05:28:38.849763 kubelet[2992]: E1031 05:28:38.849721 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zp988,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-k2xp4_calico-system(5205e036-4995-4277-8348-1e3bf6f34336): ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 05:28:38.850973 kubelet[2992]: E1031 05:28:38.850910 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k2xp4" podUID="5205e036-4995-4277-8348-1e3bf6f34336" Oct 31 05:28:38.851373 containerd[1681]: time="2025-10-31T05:28:38.851308403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 05:28:39.153156 containerd[1681]: time="2025-10-31T05:28:39.153025912Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:28:39.156331 containerd[1681]: time="2025-10-31T05:28:39.156292917Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 05:28:39.156417 containerd[1681]: time="2025-10-31T05:28:39.156358348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 05:28:39.156546 kubelet[2992]: E1031 05:28:39.156520 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 05:28:39.156581 kubelet[2992]: E1031 05:28:39.156554 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 05:28:39.156662 kubelet[2992]: E1031 05:28:39.156633 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6k4sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-d5mr7_calico-system(dfabc2cb-7622-45ac-b566-baf9171817d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 05:28:39.208712 kubelet[2992]: E1031 05:28:39.157870 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d5mr7" podUID="dfabc2cb-7622-45ac-b566-baf9171817d8" Oct 31 05:28:39.221075 systemd-networkd[1579]: calibd0dc125a75: Gained IPv6LL Oct 31 05:28:39.240601 kubelet[2992]: E1031 05:28:39.240574 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c795875-vrwbt" podUID="ca64dee5-fbb5-4c1e-b441-a24d0937f7dd" Oct 31 05:28:39.241113 kubelet[2992]: E1031 05:28:39.241096 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k2xp4" podUID="5205e036-4995-4277-8348-1e3bf6f34336" Oct 31 05:28:39.241207 kubelet[2992]: E1031 05:28:39.241187 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d5mr7" podUID="dfabc2cb-7622-45ac-b566-baf9171817d8" Oct 31 05:28:39.285043 systemd-networkd[1579]: cali62a53fcf259: Gained IPv6LL Oct 31 05:28:39.295612 kubelet[2992]: E1031 05:28:39.295581 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76bf9d45db-nvffh" podUID="186659ac-48c5-4a61-b81f-4021ad112f63" Oct 31 05:28:47.848763 containerd[1681]: time="2025-10-31T05:28:47.848698207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 05:28:48.198272 containerd[1681]: time="2025-10-31T05:28:48.198170759Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:28:48.209151 containerd[1681]: time="2025-10-31T05:28:48.209110383Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 05:28:48.209212 containerd[1681]: time="2025-10-31T05:28:48.209179111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 05:28:48.209342 kubelet[2992]: E1031 
05:28:48.209294 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 05:28:48.209581 kubelet[2992]: E1031 05:28:48.209349 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 05:28:48.209581 kubelet[2992]: E1031 05:28:48.209453 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:97d129c82c6c425ba6815f4f838775d1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ttqzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b8c4b8ddf-ztgvm_calico-system(fbed9ce9-cd92-4a0b-8450-76a451690c79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 05:28:48.211858 containerd[1681]: time="2025-10-31T05:28:48.211816737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 05:28:48.560870 containerd[1681]: time="2025-10-31T05:28:48.560776456Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:28:48.565020 containerd[1681]: time="2025-10-31T05:28:48.564982592Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 05:28:48.565076 containerd[1681]: time="2025-10-31T05:28:48.565053624Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 05:28:48.565203 kubelet[2992]: E1031 05:28:48.565155 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 05:28:48.565274 kubelet[2992]: E1031 05:28:48.565262 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 05:28:48.565411 kubelet[2992]: E1031 05:28:48.565389 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ttqzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b8c4b8ddf-ztgvm_calico-system(fbed9ce9-cd92-4a0b-8450-76a451690c79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 05:28:48.567483 kubelet[2992]: E1031 05:28:48.567429 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b8c4b8ddf-ztgvm" podUID="fbed9ce9-cd92-4a0b-8450-76a451690c79" Oct 31 05:28:49.847859 containerd[1681]: time="2025-10-31T05:28:49.847387533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 05:28:50.154161 containerd[1681]: time="2025-10-31T05:28:50.154075044Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:28:50.154516 containerd[1681]: time="2025-10-31T05:28:50.154492803Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 05:28:50.154583 containerd[1681]: time="2025-10-31T05:28:50.154570136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 05:28:50.154701 kubelet[2992]: E1031 05:28:50.154667 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:28:50.154986 kubelet[2992]: E1031 05:28:50.154707 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:28:50.154986 kubelet[2992]: E1031 05:28:50.154842 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqjkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76bf9d45db-sqc28_calico-apiserver(886a9a56-e128-4afe-8400-8c8f904fe953): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 05:28:50.156595 kubelet[2992]: E1031 05:28:50.156147 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76bf9d45db-sqc28" podUID="886a9a56-e128-4afe-8400-8c8f904fe953" Oct 31 05:28:51.849655 containerd[1681]: time="2025-10-31T05:28:51.849538115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 05:28:52.166415 containerd[1681]: time="2025-10-31T05:28:52.166327109Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:28:52.166734 containerd[1681]: time="2025-10-31T05:28:52.166713484Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 05:28:52.166805 containerd[1681]: 
time="2025-10-31T05:28:52.166758414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 05:28:52.166888 kubelet[2992]: E1031 05:28:52.166855 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:28:52.167075 kubelet[2992]: E1031 05:28:52.166895 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:28:52.167075 kubelet[2992]: E1031 05:28:52.167044 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6gs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76bf9d45db-nvffh_calico-apiserver(186659ac-48c5-4a61-b81f-4021ad112f63): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 05:28:52.167607 containerd[1681]: 
time="2025-10-31T05:28:52.167593270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 05:28:52.168846 kubelet[2992]: E1031 05:28:52.168821 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76bf9d45db-nvffh" podUID="186659ac-48c5-4a61-b81f-4021ad112f63" Oct 31 05:28:52.488558 containerd[1681]: time="2025-10-31T05:28:52.488484387Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:28:52.488964 containerd[1681]: time="2025-10-31T05:28:52.488903772Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 05:28:52.488964 containerd[1681]: time="2025-10-31T05:28:52.488936285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 05:28:52.489051 kubelet[2992]: E1031 05:28:52.489020 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 05:28:52.489095 kubelet[2992]: E1031 05:28:52.489061 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 05:28:52.489405 kubelet[2992]: E1031 05:28:52.489313 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6k4sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-d5mr7_calico-system(dfabc2cb-7622-45ac-b566-baf9171817d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 05:28:52.489499 containerd[1681]: time="2025-10-31T05:28:52.489351964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 05:28:52.842807 containerd[1681]: time="2025-10-31T05:28:52.842735391Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:28:52.843429 containerd[1681]: time="2025-10-31T05:28:52.843359053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 05:28:52.843429 containerd[1681]: time="2025-10-31T05:28:52.843421363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 05:28:52.843636 kubelet[2992]: E1031 05:28:52.843608 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 05:28:52.843692 kubelet[2992]: E1031 05:28:52.843646 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 05:28:52.844245 kubelet[2992]: E1031 05:28:52.843802 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zp988,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-k2xp4_calico-system(5205e036-4995-4277-8348-1e3bf6f34336): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 05:28:52.844334 containerd[1681]: time="2025-10-31T05:28:52.844057483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 05:28:52.845668 kubelet[2992]: E1031 05:28:52.845484 2992 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k2xp4" podUID="5205e036-4995-4277-8348-1e3bf6f34336" Oct 31 05:28:53.163410 containerd[1681]: time="2025-10-31T05:28:53.163223537Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:28:53.164027 containerd[1681]: time="2025-10-31T05:28:53.163960180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 05:28:53.164027 containerd[1681]: time="2025-10-31T05:28:53.164008004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 05:28:53.164138 kubelet[2992]: E1031 05:28:53.164100 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 05:28:53.164138 kubelet[2992]: E1031 05:28:53.164133 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 05:28:53.164264 kubelet[2992]: E1031 05:28:53.164223 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6k4sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-d5mr7_calico-system(dfabc2cb-7622-45ac-b566-baf9171817d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 05:28:53.165424 kubelet[2992]: E1031 05:28:53.165382 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d5mr7" podUID="dfabc2cb-7622-45ac-b566-baf9171817d8" Oct 31 05:28:53.848942 containerd[1681]: time="2025-10-31T05:28:53.848709144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 05:28:54.229074 containerd[1681]: time="2025-10-31T05:28:54.228957204Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:28:54.229759 containerd[1681]: time="2025-10-31T05:28:54.229651226Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 05:28:54.229759 containerd[1681]: time="2025-10-31T05:28:54.229677770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 05:28:54.229889 kubelet[2992]: E1031 05:28:54.229859 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 05:28:54.230060 kubelet[2992]: E1031 05:28:54.229902 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 05:28:54.230060 kubelet[2992]: E1031 05:28:54.230009 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zlb7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d5c795875-vrwbt_calico-system(ca64dee5-fbb5-4c1e-b441-a24d0937f7dd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 05:28:54.231334 kubelet[2992]: E1031 05:28:54.231274 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c795875-vrwbt" podUID="ca64dee5-fbb5-4c1e-b441-a24d0937f7dd" Oct 31 05:29:00.848069 kubelet[2992]: E1031 05:29:00.848038 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b8c4b8ddf-ztgvm" podUID="fbed9ce9-cd92-4a0b-8450-76a451690c79" Oct 31 05:29:02.218326 containerd[1681]: time="2025-10-31T05:29:02.218300337Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a486369aaabc5b6df377a46dfe8999ae939e1df43d392afef92f7e09aba3cbb\" id:\"7d26df07acc210b35d1e5d76b520634886ef3055cad773d18f6602ec61339405\" pid:5032 exited_at:{seconds:1761888542 nanos:217996516}" Oct 31 05:29:04.848260 kubelet[2992]: E1031 05:29:04.848210 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76bf9d45db-sqc28" podUID="886a9a56-e128-4afe-8400-8c8f904fe953" Oct 31 05:29:05.850248 kubelet[2992]: E1031 05:29:05.850034 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76bf9d45db-nvffh" podUID="186659ac-48c5-4a61-b81f-4021ad112f63" Oct 31 05:29:05.850929 kubelet[2992]: E1031 05:29:05.850794 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k2xp4" podUID="5205e036-4995-4277-8348-1e3bf6f34336" Oct 31 05:29:07.848634 kubelet[2992]: E1031 05:29:07.848558 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d5mr7" podUID="dfabc2cb-7622-45ac-b566-baf9171817d8" Oct 31 05:29:08.219029 systemd[1]: Started sshd@7-139.178.70.106:22-147.75.109.163:57910.service - OpenSSH per-connection server daemon (147.75.109.163:57910). Oct 31 05:29:08.603813 sshd[5055]: Accepted publickey for core from 147.75.109.163 port 57910 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4 Oct 31 05:29:08.605601 sshd-session[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 05:29:08.609895 systemd-logind[1655]: New session 10 of user core. Oct 31 05:29:08.618064 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 31 05:29:09.328936 sshd[5058]: Connection closed by 147.75.109.163 port 57910 Oct 31 05:29:09.329594 sshd-session[5055]: pam_unix(sshd:session): session closed for user core Oct 31 05:29:09.336755 systemd[1]: sshd@7-139.178.70.106:22-147.75.109.163:57910.service: Deactivated successfully. 
Oct 31 05:29:09.338770 systemd[1]: session-10.scope: Deactivated successfully. Oct 31 05:29:09.340538 systemd-logind[1655]: Session 10 logged out. Waiting for processes to exit. Oct 31 05:29:09.343610 systemd-logind[1655]: Removed session 10. Oct 31 05:29:09.850698 kubelet[2992]: E1031 05:29:09.850452 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c795875-vrwbt" podUID="ca64dee5-fbb5-4c1e-b441-a24d0937f7dd" Oct 31 05:29:14.344536 systemd[1]: Started sshd@8-139.178.70.106:22-147.75.109.163:38062.service - OpenSSH per-connection server daemon (147.75.109.163:38062). Oct 31 05:29:14.502970 sshd[5077]: Accepted publickey for core from 147.75.109.163 port 38062 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4 Oct 31 05:29:14.504760 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 05:29:14.510035 systemd-logind[1655]: New session 11 of user core. Oct 31 05:29:14.515312 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 31 05:29:14.848276 containerd[1681]: time="2025-10-31T05:29:14.848026309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 05:29:15.135482 sshd[5080]: Connection closed by 147.75.109.163 port 38062 Oct 31 05:29:15.134491 sshd-session[5077]: pam_unix(sshd:session): session closed for user core Oct 31 05:29:15.139337 systemd[1]: sshd@8-139.178.70.106:22-147.75.109.163:38062.service: Deactivated successfully. Oct 31 05:29:15.142099 systemd[1]: session-11.scope: Deactivated successfully. Oct 31 05:29:15.145300 systemd-logind[1655]: Session 11 logged out. Waiting for processes to exit. Oct 31 05:29:15.146607 systemd-logind[1655]: Removed session 11. 
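Every pull attempt above fails the same way: containerd receives a 404 from ghcr.io, reports the ghcr.io/flatcar/calico/*:v3.30.4 reference as NotFound, and the kubelet then alternates between ErrImagePull and ImagePullBackOff for the affected pods. Below is a minimal sketch of how one might confirm from outside the node that the tag was simply never published, assuming anonymous pull access to the public ghcr.io registry and the standard OCI distribution token flow; the repository and tag are taken from the failing pulls in this log, everything else is illustrative.

    # check_tag.py - hedged sketch: does ghcr.io/flatcar/calico/whisker:v3.30.4 exist?
    import requests

    REGISTRY = "https://ghcr.io"
    REPO = "flatcar/calico/whisker"   # repository seen in the failing PullImage calls
    TAG = "v3.30.4"                   # tag containerd reports as "not found"

    # Anonymous bearer token for a public repository (registry token flow).
    token = requests.get(
        f"{REGISTRY}/token", params={"scope": f"repository:{REPO}:pull"}
    ).json()["token"]

    # HEAD the manifest; 200 means the tag exists, 404 matches the
    # "fetch failed after status: 404 Not Found" entries in this log.
    resp = requests.head(
        f"{REGISTRY}/v2/{REPO}/manifests/{TAG}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": ", ".join([
                "application/vnd.oci.image.index.v1+json",
                "application/vnd.docker.distribution.manifest.list.v2+json",
                "application/vnd.oci.image.manifest.v1+json",
                "application/vnd.docker.distribution.manifest.v2+json",
            ]),
        },
    )
    print(REPO, TAG, resp.status_code)

A 404 here, consistent with the containerd entries above, would suggest the image references themselves are wrong and the Calico manifests need to point at an existing tag; a 200 would instead point at a node-side pull or registry-mirror problem.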
Oct 31 05:29:15.169330 containerd[1681]: time="2025-10-31T05:29:15.169302512Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:29:15.173713 containerd[1681]: time="2025-10-31T05:29:15.173614197Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 05:29:15.173713 containerd[1681]: time="2025-10-31T05:29:15.173689274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 05:29:15.173988 kubelet[2992]: E1031 05:29:15.173895 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 05:29:15.174777 kubelet[2992]: E1031 05:29:15.174107 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 05:29:15.174777 kubelet[2992]: E1031 05:29:15.174192 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:97d129c82c6c425ba6815f4f838775d1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ttqzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b8c4b8ddf-ztgvm_calico-system(fbed9ce9-cd92-4a0b-8450-76a451690c79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 05:29:15.177387 containerd[1681]: 
time="2025-10-31T05:29:15.177360686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 05:29:15.543663 containerd[1681]: time="2025-10-31T05:29:15.543341560Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:29:15.544150 containerd[1681]: time="2025-10-31T05:29:15.544086171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 05:29:15.544150 containerd[1681]: time="2025-10-31T05:29:15.544135010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 05:29:15.544316 kubelet[2992]: E1031 05:29:15.544286 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 05:29:15.544356 kubelet[2992]: E1031 05:29:15.544321 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 05:29:15.544456 kubelet[2992]: E1031 05:29:15.544420 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ttqzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b8c4b8ddf-ztgvm_calico-system(fbed9ce9-cd92-4a0b-8450-76a451690c79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 05:29:15.545806 kubelet[2992]: E1031 05:29:15.545766 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b8c4b8ddf-ztgvm" podUID="fbed9ce9-cd92-4a0b-8450-76a451690c79" Oct 31 05:29:17.848124 containerd[1681]: time="2025-10-31T05:29:17.848096504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 05:29:18.200858 containerd[1681]: time="2025-10-31T05:29:18.200767877Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:29:18.201133 containerd[1681]: time="2025-10-31T05:29:18.201105102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 05:29:18.201171 containerd[1681]: time="2025-10-31T05:29:18.201163012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 05:29:18.201340 kubelet[2992]: E1031 05:29:18.201280 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:29:18.201528 kubelet[2992]: E1031 05:29:18.201348 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:29:18.201719 kubelet[2992]: E1031 05:29:18.201685 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqjkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76bf9d45db-sqc28_calico-apiserver(886a9a56-e128-4afe-8400-8c8f904fe953): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 05:29:18.203017 kubelet[2992]: E1031 05:29:18.202982 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76bf9d45db-sqc28" podUID="886a9a56-e128-4afe-8400-8c8f904fe953" Oct 31 05:29:18.848252 containerd[1681]: time="2025-10-31T05:29:18.848227684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 05:29:19.203375 containerd[1681]: time="2025-10-31T05:29:19.203138372Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:29:19.203714 containerd[1681]: time="2025-10-31T05:29:19.203694728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 05:29:19.203790 containerd[1681]: time="2025-10-31T05:29:19.203774133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 05:29:19.203967 kubelet[2992]: E1031 05:29:19.203939 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 05:29:19.204163 kubelet[2992]: E1031 05:29:19.203976 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 05:29:19.204163 kubelet[2992]: E1031 05:29:19.204063 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zp988,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-k2xp4_calico-system(5205e036-4995-4277-8348-1e3bf6f34336): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 05:29:19.205373 kubelet[2992]: E1031 05:29:19.205353 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k2xp4" podUID="5205e036-4995-4277-8348-1e3bf6f34336" Oct 31 05:29:20.145543 systemd[1]: Started 
sshd@9-139.178.70.106:22-147.75.109.163:43246.service - OpenSSH per-connection server daemon (147.75.109.163:43246). Oct 31 05:29:20.332353 sshd[5094]: Accepted publickey for core from 147.75.109.163 port 43246 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4 Oct 31 05:29:20.333138 sshd-session[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 05:29:20.335889 systemd-logind[1655]: New session 12 of user core. Oct 31 05:29:20.343046 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 31 05:29:20.406062 sshd[5097]: Connection closed by 147.75.109.163 port 43246 Oct 31 05:29:20.406496 sshd-session[5094]: pam_unix(sshd:session): session closed for user core Oct 31 05:29:20.412179 systemd[1]: sshd@9-139.178.70.106:22-147.75.109.163:43246.service: Deactivated successfully. Oct 31 05:29:20.413606 systemd[1]: session-12.scope: Deactivated successfully. Oct 31 05:29:20.414302 systemd-logind[1655]: Session 12 logged out. Waiting for processes to exit. Oct 31 05:29:20.417076 systemd[1]: Started sshd@10-139.178.70.106:22-147.75.109.163:43256.service - OpenSSH per-connection server daemon (147.75.109.163:43256). Oct 31 05:29:20.418465 systemd-logind[1655]: Removed session 12. Oct 31 05:29:20.456266 sshd[5110]: Accepted publickey for core from 147.75.109.163 port 43256 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4 Oct 31 05:29:20.457150 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 05:29:20.461972 systemd-logind[1655]: New session 13 of user core. Oct 31 05:29:20.467033 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 31 05:29:20.543023 sshd[5113]: Connection closed by 147.75.109.163 port 43256 Oct 31 05:29:20.543238 sshd-session[5110]: pam_unix(sshd:session): session closed for user core Oct 31 05:29:20.550986 systemd[1]: sshd@10-139.178.70.106:22-147.75.109.163:43256.service: Deactivated successfully. Oct 31 05:29:20.554644 systemd[1]: session-13.scope: Deactivated successfully. Oct 31 05:29:20.556064 systemd-logind[1655]: Session 13 logged out. Waiting for processes to exit. Oct 31 05:29:20.560341 systemd[1]: Started sshd@11-139.178.70.106:22-147.75.109.163:43264.service - OpenSSH per-connection server daemon (147.75.109.163:43264). Oct 31 05:29:20.562880 systemd-logind[1655]: Removed session 13. Oct 31 05:29:20.603617 sshd[5123]: Accepted publickey for core from 147.75.109.163 port 43264 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4 Oct 31 05:29:20.604434 sshd-session[5123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 05:29:20.607509 systemd-logind[1655]: New session 14 of user core. Oct 31 05:29:20.613071 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 31 05:29:20.668174 sshd[5126]: Connection closed by 147.75.109.163 port 43264 Oct 31 05:29:20.668714 sshd-session[5123]: pam_unix(sshd:session): session closed for user core Oct 31 05:29:20.671556 systemd-logind[1655]: Session 14 logged out. Waiting for processes to exit. Oct 31 05:29:20.671716 systemd[1]: sshd@11-139.178.70.106:22-147.75.109.163:43264.service: Deactivated successfully. Oct 31 05:29:20.672872 systemd[1]: session-14.scope: Deactivated successfully. Oct 31 05:29:20.674046 systemd-logind[1655]: Removed session 14. 
Oct 31 05:29:20.849092 containerd[1681]: time="2025-10-31T05:29:20.849019867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 05:29:21.197089 containerd[1681]: time="2025-10-31T05:29:21.197041867Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:29:21.242085 containerd[1681]: time="2025-10-31T05:29:21.241970717Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 05:29:21.242085 containerd[1681]: time="2025-10-31T05:29:21.242040051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 05:29:21.242203 kubelet[2992]: E1031 05:29:21.242158 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:29:21.242400 kubelet[2992]: E1031 05:29:21.242210 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:29:21.242554 kubelet[2992]: E1031 05:29:21.242527 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6gs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76bf9d45db-nvffh_calico-apiserver(186659ac-48c5-4a61-b81f-4021ad112f63): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 05:29:21.243650 kubelet[2992]: E1031 05:29:21.243631 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76bf9d45db-nvffh" podUID="186659ac-48c5-4a61-b81f-4021ad112f63" Oct 31 05:29:21.848802 containerd[1681]: time="2025-10-31T05:29:21.848756232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 05:29:22.209387 containerd[1681]: time="2025-10-31T05:29:22.209267005Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:29:22.214806 containerd[1681]: time="2025-10-31T05:29:22.214772515Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 05:29:22.214896 containerd[1681]: time="2025-10-31T05:29:22.214835909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 05:29:22.215008 kubelet[2992]: E1031 05:29:22.214975 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 05:29:22.215081 kubelet[2992]: E1031 05:29:22.215022 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 05:29:22.215219 kubelet[2992]: E1031 05:29:22.215152 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6k4sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-d5mr7_calico-system(dfabc2cb-7622-45ac-b566-baf9171817d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 05:29:22.217155 containerd[1681]: time="2025-10-31T05:29:22.217136280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 05:29:22.561690 containerd[1681]: time="2025-10-31T05:29:22.561502960Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:29:22.564598 containerd[1681]: time="2025-10-31T05:29:22.564560831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 05:29:22.564708 containerd[1681]: time="2025-10-31T05:29:22.564687430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 05:29:22.564925 kubelet[2992]: E1031 05:29:22.564841 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 05:29:22.565394 kubelet[2992]: E1031 05:29:22.565016 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 05:29:22.565481 kubelet[2992]: E1031 05:29:22.565450 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6k4sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-d5mr7_calico-system(dfabc2cb-7622-45ac-b566-baf9171817d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 05:29:22.566582 kubelet[2992]: E1031 05:29:22.566556 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d5mr7" podUID="dfabc2cb-7622-45ac-b566-baf9171817d8" Oct 31 05:29:24.847933 containerd[1681]: time="2025-10-31T05:29:24.847755660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 05:29:25.184939 containerd[1681]: time="2025-10-31T05:29:25.184732808Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 05:29:25.201862 containerd[1681]: time="2025-10-31T05:29:25.187282317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 05:29:25.201941 kubelet[2992]: E1031 05:29:25.201898 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 05:29:25.202124 kubelet[2992]: E1031 05:29:25.201969 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 05:29:25.202124 kubelet[2992]: E1031 05:29:25.202055 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zlb7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d5c795875-vrwbt_calico-system(ca64dee5-fbb5-4c1e-b441-a24d0937f7dd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 05:29:25.203532 kubelet[2992]: E1031 05:29:25.203240 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c795875-vrwbt" podUID="ca64dee5-fbb5-4c1e-b441-a24d0937f7dd" Oct 31 05:29:25.204867 containerd[1681]: time="2025-10-31T05:29:25.187349475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 05:29:25.678072 systemd[1]: Started sshd@12-139.178.70.106:22-147.75.109.163:43268.service - OpenSSH per-connection server daemon (147.75.109.163:43268). Oct 31 05:29:25.739142 sshd[5142]: Accepted publickey for core from 147.75.109.163 port 43268 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4 Oct 31 05:29:25.739391 sshd-session[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 05:29:25.743966 systemd-logind[1655]: New session 15 of user core. Oct 31 05:29:25.747009 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 31 05:29:25.822139 sshd[5145]: Connection closed by 147.75.109.163 port 43268 Oct 31 05:29:25.822631 sshd-session[5142]: pam_unix(sshd:session): session closed for user core Oct 31 05:29:25.851130 systemd[1]: sshd@12-139.178.70.106:22-147.75.109.163:43268.service: Deactivated successfully. Oct 31 05:29:25.852815 systemd[1]: session-15.scope: Deactivated successfully. Oct 31 05:29:25.854956 systemd-logind[1655]: Session 15 logged out. Waiting for processes to exit. Oct 31 05:29:25.855872 systemd-logind[1655]: Removed session 15. 
Oct 31 05:29:26.848314 kubelet[2992]: E1031 05:29:26.848275 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b8c4b8ddf-ztgvm" podUID="fbed9ce9-cd92-4a0b-8450-76a451690c79" Oct 31 05:29:29.848332 kubelet[2992]: E1031 05:29:29.848131 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k2xp4" podUID="5205e036-4995-4277-8348-1e3bf6f34336" Oct 31 05:29:30.830170 systemd[1]: Started sshd@13-139.178.70.106:22-147.75.109.163:38588.service - OpenSSH per-connection server daemon (147.75.109.163:38588). Oct 31 05:29:30.849282 kubelet[2992]: E1031 05:29:30.848611 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76bf9d45db-sqc28" podUID="886a9a56-e128-4afe-8400-8c8f904fe953" Oct 31 05:29:30.977834 sshd[5160]: Accepted publickey for core from 147.75.109.163 port 38588 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4 Oct 31 05:29:30.979088 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 05:29:30.986065 systemd-logind[1655]: New session 16 of user core. Oct 31 05:29:30.989026 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 31 05:29:31.125631 sshd[5163]: Connection closed by 147.75.109.163 port 38588 Oct 31 05:29:31.126089 sshd-session[5160]: pam_unix(sshd:session): session closed for user core Oct 31 05:29:31.128535 systemd[1]: sshd@13-139.178.70.106:22-147.75.109.163:38588.service: Deactivated successfully. Oct 31 05:29:31.129528 systemd[1]: session-16.scope: Deactivated successfully. Oct 31 05:29:31.130487 systemd-logind[1655]: Session 16 logged out. Waiting for processes to exit. Oct 31 05:29:31.131050 systemd-logind[1655]: Removed session 16. 
Oct 31 05:29:32.310756 containerd[1681]: time="2025-10-31T05:29:32.310727136Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a486369aaabc5b6df377a46dfe8999ae939e1df43d392afef92f7e09aba3cbb\" id:\"1b664a88ce2d0b179bdfbb628d82e14cdc722474f12d40ce017e3904aee89dc2\" pid:5185 exited_at:{seconds:1761888572 nanos:306710781}" Oct 31 05:29:32.848388 kubelet[2992]: E1031 05:29:32.848346 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d5mr7" podUID="dfabc2cb-7622-45ac-b566-baf9171817d8" Oct 31 05:29:35.852932 kubelet[2992]: E1031 05:29:35.852166 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c795875-vrwbt" podUID="ca64dee5-fbb5-4c1e-b441-a24d0937f7dd" Oct 31 05:29:35.853657 kubelet[2992]: E1031 05:29:35.853638 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76bf9d45db-nvffh" podUID="186659ac-48c5-4a61-b81f-4021ad112f63" Oct 31 05:29:36.136434 systemd[1]: Started sshd@14-139.178.70.106:22-147.75.109.163:38598.service - OpenSSH per-connection server daemon (147.75.109.163:38598). Oct 31 05:29:36.283896 sshd[5197]: Accepted publickey for core from 147.75.109.163 port 38598 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4 Oct 31 05:29:36.291678 sshd-session[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 05:29:36.294446 systemd-logind[1655]: New session 17 of user core. Oct 31 05:29:36.303213 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 31 05:29:36.473452 sshd[5200]: Connection closed by 147.75.109.163 port 38598 Oct 31 05:29:36.477400 systemd[1]: sshd@14-139.178.70.106:22-147.75.109.163:38598.service: Deactivated successfully. 
Oct 31 05:29:36.473585 sshd-session[5197]: pam_unix(sshd:session): session closed for user core Oct 31 05:29:36.480563 systemd[1]: session-17.scope: Deactivated successfully. Oct 31 05:29:36.481278 systemd-logind[1655]: Session 17 logged out. Waiting for processes to exit. Oct 31 05:29:36.482408 systemd-logind[1655]: Removed session 17. Oct 31 05:29:40.848336 kubelet[2992]: E1031 05:29:40.848008 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k2xp4" podUID="5205e036-4995-4277-8348-1e3bf6f34336" Oct 31 05:29:41.483578 systemd[1]: Started sshd@15-139.178.70.106:22-147.75.109.163:58578.service - OpenSSH per-connection server daemon (147.75.109.163:58578). Oct 31 05:29:41.525456 sshd[5213]: Accepted publickey for core from 147.75.109.163 port 58578 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4 Oct 31 05:29:41.526075 sshd-session[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 05:29:41.528602 systemd-logind[1655]: New session 18 of user core. Oct 31 05:29:41.536146 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 31 05:29:41.593033 sshd[5216]: Connection closed by 147.75.109.163 port 58578 Oct 31 05:29:41.593815 sshd-session[5213]: pam_unix(sshd:session): session closed for user core Oct 31 05:29:41.600574 systemd[1]: sshd@15-139.178.70.106:22-147.75.109.163:58578.service: Deactivated successfully. Oct 31 05:29:41.601770 systemd[1]: session-18.scope: Deactivated successfully. Oct 31 05:29:41.602479 systemd-logind[1655]: Session 18 logged out. Waiting for processes to exit. Oct 31 05:29:41.603682 systemd-logind[1655]: Removed session 18. Oct 31 05:29:41.604415 systemd[1]: Started sshd@16-139.178.70.106:22-147.75.109.163:58592.service - OpenSSH per-connection server daemon (147.75.109.163:58592). Oct 31 05:29:41.643952 sshd[5228]: Accepted publickey for core from 147.75.109.163 port 58592 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4 Oct 31 05:29:41.645103 sshd-session[5228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 05:29:41.647695 systemd-logind[1655]: New session 19 of user core. Oct 31 05:29:41.653261 systemd[1]: Started session-19.scope - Session 19 of User core. 
Oct 31 05:29:41.850825 kubelet[2992]: E1031 05:29:41.850791 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b8c4b8ddf-ztgvm" podUID="fbed9ce9-cd92-4a0b-8450-76a451690c79" Oct 31 05:29:42.062596 sshd[5231]: Connection closed by 147.75.109.163 port 58592 Oct 31 05:29:42.062459 sshd-session[5228]: pam_unix(sshd:session): session closed for user core Oct 31 05:29:42.070177 systemd[1]: sshd@16-139.178.70.106:22-147.75.109.163:58592.service: Deactivated successfully. Oct 31 05:29:42.072194 systemd[1]: session-19.scope: Deactivated successfully. Oct 31 05:29:42.073848 systemd-logind[1655]: Session 19 logged out. Waiting for processes to exit. Oct 31 05:29:42.074732 systemd[1]: Started sshd@17-139.178.70.106:22-147.75.109.163:58596.service - OpenSSH per-connection server daemon (147.75.109.163:58596). Oct 31 05:29:42.077472 systemd-logind[1655]: Removed session 19. Oct 31 05:29:42.147971 sshd[5241]: Accepted publickey for core from 147.75.109.163 port 58596 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4 Oct 31 05:29:42.149239 sshd-session[5241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 05:29:42.155102 systemd-logind[1655]: New session 20 of user core. Oct 31 05:29:42.161166 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 31 05:29:42.738792 sshd[5244]: Connection closed by 147.75.109.163 port 58596 Oct 31 05:29:42.746303 sshd-session[5241]: pam_unix(sshd:session): session closed for user core Oct 31 05:29:42.757120 systemd[1]: Started sshd@18-139.178.70.106:22-147.75.109.163:58600.service - OpenSSH per-connection server daemon (147.75.109.163:58600). Oct 31 05:29:42.758340 systemd[1]: sshd@17-139.178.70.106:22-147.75.109.163:58596.service: Deactivated successfully. Oct 31 05:29:42.761623 systemd[1]: session-20.scope: Deactivated successfully. Oct 31 05:29:42.764644 systemd-logind[1655]: Session 20 logged out. Waiting for processes to exit. Oct 31 05:29:42.766180 systemd-logind[1655]: Removed session 20. Oct 31 05:29:42.833444 sshd[5255]: Accepted publickey for core from 147.75.109.163 port 58600 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4 Oct 31 05:29:42.833978 sshd-session[5255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 05:29:42.837079 systemd-logind[1655]: New session 21 of user core. Oct 31 05:29:42.843012 systemd[1]: Started session-21.scope - Session 21 of User core. 
Oct 31 05:29:43.159339 sshd[5264]: Connection closed by 147.75.109.163 port 58600 Oct 31 05:29:43.158597 sshd-session[5255]: pam_unix(sshd:session): session closed for user core Oct 31 05:29:43.166118 systemd[1]: sshd@18-139.178.70.106:22-147.75.109.163:58600.service: Deactivated successfully. Oct 31 05:29:43.167811 systemd[1]: session-21.scope: Deactivated successfully. Oct 31 05:29:43.168378 systemd-logind[1655]: Session 21 logged out. Waiting for processes to exit. Oct 31 05:29:43.170755 systemd[1]: Started sshd@19-139.178.70.106:22-147.75.109.163:58602.service - OpenSSH per-connection server daemon (147.75.109.163:58602). Oct 31 05:29:43.172404 systemd-logind[1655]: Removed session 21. Oct 31 05:29:43.241179 sshd[5274]: Accepted publickey for core from 147.75.109.163 port 58602 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4 Oct 31 05:29:43.242017 sshd-session[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 05:29:43.245368 systemd-logind[1655]: New session 22 of user core. Oct 31 05:29:43.250056 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 31 05:29:43.366118 sshd[5277]: Connection closed by 147.75.109.163 port 58602 Oct 31 05:29:43.366550 sshd-session[5274]: pam_unix(sshd:session): session closed for user core Oct 31 05:29:43.368685 systemd[1]: sshd@19-139.178.70.106:22-147.75.109.163:58602.service: Deactivated successfully. Oct 31 05:29:43.371352 systemd[1]: session-22.scope: Deactivated successfully. Oct 31 05:29:43.372830 systemd-logind[1655]: Session 22 logged out. Waiting for processes to exit. Oct 31 05:29:43.373482 systemd-logind[1655]: Removed session 22. Oct 31 05:29:45.848909 kubelet[2992]: E1031 05:29:45.848800 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76bf9d45db-sqc28" podUID="886a9a56-e128-4afe-8400-8c8f904fe953" Oct 31 05:29:45.850138 kubelet[2992]: E1031 05:29:45.850093 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d5mr7" podUID="dfabc2cb-7622-45ac-b566-baf9171817d8" Oct 31 05:29:48.374880 systemd[1]: Started sshd@20-139.178.70.106:22-147.75.109.163:58612.service - OpenSSH per-connection server daemon (147.75.109.163:58612). 
Oct 31 05:29:48.502763 update_engine[1656]: I20251031 05:29:48.502722 1656 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 31 05:29:48.502763 update_engine[1656]: I20251031 05:29:48.502763 1656 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 31 05:29:48.520023 update_engine[1656]: I20251031 05:29:48.519904 1656 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Oct 31 05:29:48.521146 update_engine[1656]: I20251031 05:29:48.521052 1656 omaha_request_params.cc:62] Current group set to developer Oct 31 05:29:48.522520 update_engine[1656]: I20251031 05:29:48.522497 1656 update_attempter.cc:499] Already updated boot flags. Skipping. Oct 31 05:29:48.522520 update_engine[1656]: I20251031 05:29:48.522513 1656 update_attempter.cc:643] Scheduling an action processor start. Oct 31 05:29:48.522602 update_engine[1656]: I20251031 05:29:48.522527 1656 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 31 05:29:48.522602 update_engine[1656]: I20251031 05:29:48.522567 1656 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Oct 31 05:29:48.522636 update_engine[1656]: I20251031 05:29:48.522605 1656 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 31 05:29:48.522636 update_engine[1656]: I20251031 05:29:48.522610 1656 omaha_request_action.cc:272] Request: Oct 31 05:29:48.522636 update_engine[1656]: Oct 31 05:29:48.522636 update_engine[1656]: Oct 31 05:29:48.522636 update_engine[1656]: Oct 31 05:29:48.522636 update_engine[1656]: Oct 31 05:29:48.522636 update_engine[1656]: Oct 31 05:29:48.522636 update_engine[1656]: Oct 31 05:29:48.522636 update_engine[1656]: Oct 31 05:29:48.522636 update_engine[1656]: Oct 31 05:29:48.522636 update_engine[1656]: I20251031 05:29:48.522614 1656 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 31 05:29:48.539623 update_engine[1656]: I20251031 05:29:48.538059 1656 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 31 05:29:48.540931 update_engine[1656]: I20251031 05:29:48.538456 1656 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 31 05:29:48.542837 update_engine[1656]: E20251031 05:29:48.542484 1656 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Oct 31 05:29:48.542837 update_engine[1656]: I20251031 05:29:48.542542 1656 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Oct 31 05:29:48.561444 locksmithd[1718]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 31 05:29:48.592179 sshd[5291]: Accepted publickey for core from 147.75.109.163 port 58612 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4 Oct 31 05:29:48.593559 sshd-session[5291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 05:29:48.597609 systemd-logind[1655]: New session 23 of user core. Oct 31 05:29:48.604069 systemd[1]: Started session-23.scope - Session 23 of User core. 
Oct 31 05:29:48.849263 kubelet[2992]: E1031 05:29:48.848741 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c795875-vrwbt" podUID="ca64dee5-fbb5-4c1e-b441-a24d0937f7dd" Oct 31 05:29:48.921720 sshd[5294]: Connection closed by 147.75.109.163 port 58612 Oct 31 05:29:48.924692 systemd[1]: sshd@20-139.178.70.106:22-147.75.109.163:58612.service: Deactivated successfully. Oct 31 05:29:48.922115 sshd-session[5291]: pam_unix(sshd:session): session closed for user core Oct 31 05:29:48.926087 systemd[1]: session-23.scope: Deactivated successfully. Oct 31 05:29:48.926753 systemd-logind[1655]: Session 23 logged out. Waiting for processes to exit. Oct 31 05:29:48.927606 systemd-logind[1655]: Removed session 23. Oct 31 05:29:49.850165 kubelet[2992]: E1031 05:29:49.850131 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76bf9d45db-nvffh" podUID="186659ac-48c5-4a61-b81f-4021ad112f63" Oct 31 05:29:53.935437 systemd[1]: Started sshd@21-139.178.70.106:22-147.75.109.163:54856.service - OpenSSH per-connection server daemon (147.75.109.163:54856). Oct 31 05:29:53.991537 sshd[5314]: Accepted publickey for core from 147.75.109.163 port 54856 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4 Oct 31 05:29:53.992096 sshd-session[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 05:29:53.997928 systemd-logind[1655]: New session 24 of user core. Oct 31 05:29:54.002039 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 31 05:29:54.576379 sshd[5317]: Connection closed by 147.75.109.163 port 54856 Oct 31 05:29:54.576840 sshd-session[5314]: pam_unix(sshd:session): session closed for user core Oct 31 05:29:54.580527 systemd[1]: sshd@21-139.178.70.106:22-147.75.109.163:54856.service: Deactivated successfully. Oct 31 05:29:54.582671 systemd[1]: session-24.scope: Deactivated successfully. Oct 31 05:29:54.585627 systemd-logind[1655]: Session 24 logged out. Waiting for processes to exit. Oct 31 05:29:54.586382 systemd-logind[1655]: Removed session 24. 
Oct 31 05:29:55.848583 kubelet[2992]: E1031 05:29:55.848236 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k2xp4" podUID="5205e036-4995-4277-8348-1e3bf6f34336"
Oct 31 05:29:55.967836 containerd[1681]: time="2025-10-31T05:29:55.967759190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Oct 31 05:29:56.302893 containerd[1681]: time="2025-10-31T05:29:56.302655677Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 31 05:29:56.303330 containerd[1681]: time="2025-10-31T05:29:56.303308325Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Oct 31 05:29:56.303419 containerd[1681]: time="2025-10-31T05:29:56.303366837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Oct 31 05:29:56.305924 kubelet[2992]: E1031 05:29:56.303534 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 31 05:29:56.307744 kubelet[2992]: E1031 05:29:56.307594 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 31 05:29:56.321279 kubelet[2992]: E1031 05:29:56.321225 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:97d129c82c6c425ba6815f4f838775d1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ttqzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b8c4b8ddf-ztgvm_calico-system(fbed9ce9-cd92-4a0b-8450-76a451690c79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Oct 31 05:29:56.323491 containerd[1681]: time="2025-10-31T05:29:56.323322241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Oct 31 05:29:56.649705 containerd[1681]: time="2025-10-31T05:29:56.649535151Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 31 05:29:56.653651 containerd[1681]: time="2025-10-31T05:29:56.653632209Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Oct 31 05:29:56.653745 containerd[1681]: time="2025-10-31T05:29:56.653650134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Oct 31 05:29:56.656391 kubelet[2992]: E1031 05:29:56.653847 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 31 05:29:56.656391 kubelet[2992]: E1031 05:29:56.653880 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 31 05:29:56.656391 kubelet[2992]: E1031 05:29:56.653988 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ttqzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b8c4b8ddf-ztgvm_calico-system(fbed9ce9-cd92-4a0b-8450-76a451690c79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Oct 31 05:29:56.656391 kubelet[2992]: E1031 05:29:56.655303 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b8c4b8ddf-ztgvm" podUID="fbed9ce9-cd92-4a0b-8450-76a451690c79"
Oct 31 05:29:58.380173 update_engine[1656]: I20251031 05:29:58.380105 1656 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Oct 31 05:29:58.380173 update_engine[1656]: I20251031 05:29:58.380170 1656 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Oct 31 05:29:58.380517 update_engine[1656]: I20251031 05:29:58.380437 1656 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Oct 31 05:29:58.383992 update_engine[1656]: E20251031 05:29:58.383970 1656 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Oct 31 05:29:58.384047 update_engine[1656]: I20251031 05:29:58.384019 1656 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Oct 31 05:29:59.588455 systemd[1]: Started sshd@22-139.178.70.106:22-147.75.109.163:54870.service - OpenSSH per-connection server daemon (147.75.109.163:54870).
Oct 31 05:29:59.667061 sshd[5336]: Accepted publickey for core from 147.75.109.163 port 54870 ssh2: RSA SHA256:UFAhvEX7xC3a4oKPhS739blDe3ccIA0DQUFWMCfmmc4
Oct 31 05:29:59.667961 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 05:29:59.672024 systemd-logind[1655]: New session 25 of user core.
Oct 31 05:29:59.678159 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 31 05:29:59.772655 sshd[5339]: Connection closed by 147.75.109.163 port 54870
Oct 31 05:29:59.772860 sshd-session[5336]: pam_unix(sshd:session): session closed for user core
Oct 31 05:29:59.776353 systemd[1]: sshd@22-139.178.70.106:22-147.75.109.163:54870.service: Deactivated successfully.
Oct 31 05:29:59.778297 systemd[1]: session-25.scope: Deactivated successfully.
Oct 31 05:29:59.779011 systemd-logind[1655]: Session 25 logged out. Waiting for processes to exit.
Oct 31 05:29:59.780497 systemd-logind[1655]: Removed session 25.
Oct 31 05:29:59.849650 kubelet[2992]: E1031 05:29:59.849618 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c795875-vrwbt" podUID="ca64dee5-fbb5-4c1e-b441-a24d0937f7dd"
Oct 31 05:29:59.850958 kubelet[2992]: E1031 05:29:59.850488 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d5mr7" podUID="dfabc2cb-7622-45ac-b566-baf9171817d8"
Oct 31 05:30:00.848436 containerd[1681]: time="2025-10-31T05:30:00.848410215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 31 05:30:01.464684 containerd[1681]: time="2025-10-31T05:30:01.464641356Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 31 05:30:01.464990 containerd[1681]: time="2025-10-31T05:30:01.464963826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 31 05:30:01.465123 containerd[1681]: time="2025-10-31T05:30:01.465017220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 31 05:30:01.465158 kubelet[2992]: E1031 05:30:01.465119 2992 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 05:30:01.465726 kubelet[2992]: E1031 05:30:01.465161 2992 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 05:30:01.468720 kubelet[2992]: E1031 05:30:01.468653 2992 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqjkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76bf9d45db-sqc28_calico-apiserver(886a9a56-e128-4afe-8400-8c8f904fe953): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 31 05:30:01.470194 kubelet[2992]: E1031 05:30:01.470151 2992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76bf9d45db-sqc28" podUID="886a9a56-e128-4afe-8400-8c8f904fe953"
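Editor's note: every failure recorded above is the same condition repeated across images: containerd tries to resolve a ghcr.io/flatcar/calico/*:v3.30.4 reference, the registry answers 404 Not Found, containerd reports NotFound over CRI, and kubelet surfaces it as ErrImagePull and then ImagePullBackOff. As an illustrative sketch only (not part of the captured log), the same resolution can be reproduced directly against this node's containerd with the v1.x Go client; the socket path, the k8s.io namespace, and the choice of the whisker reference are assumptions based on common defaults and the references seen above.

    // pullcheck.go: minimal sketch that retries the failing tag resolution by hand.
    package main

    import (
        "context"
        "fmt"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/errdefs"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Assumed default containerd socket on a Flatcar node.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // Images managed by the kubelet/CRI plugin live in the "k8s.io" namespace by default.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        ref := "ghcr.io/flatcar/calico/whisker:v3.30.4"
        img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
        switch {
        case errdefs.IsNotFound(err):
            // Should correspond to the "failed to resolve reference ...: not found"
            // errors in the log above: the registry returned 404 for this tag.
            fmt.Printf("tag not found in registry: %s: %v\n", ref, err)
        case err != nil:
            fmt.Printf("pull failed for another reason: %v\n", err)
        default:
            fmt.Printf("pulled %s\n", img.Name())
        }
    }

If the tag really is absent upstream, kubelet will keep retrying with increasing back-off (the ImagePullBackOff entries above) until the reference resolves, so the remedy is on the image side: publish the missing v3.30.4 tags or point the Calico resources at a tag the registry actually serves.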