Nov 6 00:39:30.503160 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 22:11:41 -00 2025
Nov 6 00:39:30.503184 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=5a467f58ff1d38830572ea713da04924778847a98299b0cfa25690713b346f38
Nov 6 00:39:30.503191 kernel: Disabled fast string operations
Nov 6 00:39:30.503195 kernel: BIOS-provided physical RAM map:
Nov 6 00:39:30.503200 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Nov 6 00:39:30.503204 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Nov 6 00:39:30.503210 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Nov 6 00:39:30.503215 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Nov 6 00:39:30.503220 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Nov 6 00:39:30.503224 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Nov 6 00:39:30.503229 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Nov 6 00:39:30.503233 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Nov 6 00:39:30.503238 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Nov 6 00:39:30.503243 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Nov 6 00:39:30.503249 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Nov 6 00:39:30.503255 kernel: NX (Execute Disable) protection: active
Nov 6 00:39:30.503260 kernel: APIC: Static calls initialized
Nov 6 00:39:30.503265 kernel: SMBIOS 2.7 present.
Nov 6 00:39:30.503270 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Nov 6 00:39:30.503275 kernel: DMI: Memory slots populated: 1/128
Nov 6 00:39:30.503366 kernel: vmware: hypercall mode: 0x00
Nov 6 00:39:30.503372 kernel: Hypervisor detected: VMware
Nov 6 00:39:30.503377 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Nov 6 00:39:30.503382 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Nov 6 00:39:30.503387 kernel: vmware: using clock offset of 2870533312 ns
Nov 6 00:39:30.503393 kernel: tsc: Detected 3408.000 MHz processor
Nov 6 00:39:30.503401 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 6 00:39:30.503411 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 6 00:39:30.503419 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Nov 6 00:39:30.503428 kernel: total RAM covered: 3072M
Nov 6 00:39:30.503433 kernel: Found optimal setting for mtrr clean up
Nov 6 00:39:30.503439 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Nov 6 00:39:30.503444 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Nov 6 00:39:30.503450 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 6 00:39:30.503455 kernel: Using GB pages for direct mapping
Nov 6 00:39:30.503460 kernel: ACPI: Early table checksum verification disabled
Nov 6 00:39:30.503466 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Nov 6 00:39:30.503497 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Nov 6 00:39:30.503504 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Nov 6 00:39:30.503510 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Nov 6 00:39:30.503518 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Nov 6 00:39:30.503523 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Nov 6 00:39:30.503530 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Nov 6 00:39:30.503535 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Nov 6 00:39:30.503541 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Nov 6 00:39:30.503547 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Nov 6 00:39:30.503552 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Nov 6 00:39:30.503558 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Nov 6 00:39:30.503565 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Nov 6 00:39:30.503571 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Nov 6 00:39:30.503576 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Nov 6 00:39:30.503582 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Nov 6 00:39:30.503587 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Nov 6 00:39:30.503593 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Nov 6 00:39:30.503598 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Nov 6 00:39:30.503604 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Nov 6 00:39:30.503610 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Nov 6 00:39:30.503615 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Nov 6 00:39:30.503621 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 6 00:39:30.503627 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 6 00:39:30.503633 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Nov 6 00:39:30.503638 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff]
Nov 6 00:39:30.503644 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff]
Nov 6 00:39:30.503651 kernel: Zone ranges:
Nov 6 00:39:30.503657 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 6 00:39:30.503662 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Nov 6 00:39:30.503668 kernel: Normal empty
Nov 6 00:39:30.503673 kernel: Device empty
Nov 6 00:39:30.503679 kernel: Movable zone start for each node
Nov 6 00:39:30.503684 kernel: Early memory node ranges
Nov 6 00:39:30.503690 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Nov 6 00:39:30.503695 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Nov 6 00:39:30.503701 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Nov 6 00:39:30.503707 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Nov 6 00:39:30.503712 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 6 00:39:30.503718 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Nov 6 00:39:30.503724 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Nov 6 00:39:30.503729 kernel: ACPI: PM-Timer IO Port: 0x1008
Nov 6 00:39:30.503735 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Nov 6 00:39:30.503741 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Nov 6 00:39:30.503747 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Nov 6 00:39:30.503753 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Nov 6 00:39:30.503758 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Nov 6 00:39:30.503763 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Nov 6 00:39:30.503769 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Nov 6 00:39:30.503774 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Nov 6 00:39:30.503780 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Nov 6 00:39:30.503785 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Nov 6 00:39:30.503791 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Nov 6 00:39:30.503796 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Nov 6 00:39:30.503802 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Nov 6 00:39:30.503807 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Nov 6 00:39:30.503812 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Nov 6 00:39:30.503818 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Nov 6 00:39:30.503823 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Nov 6 00:39:30.503830 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Nov 6 00:39:30.503835 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Nov 6 00:39:30.503841 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Nov 6 00:39:30.503846 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Nov 6 00:39:30.503851 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Nov 6 00:39:30.503857 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Nov 6 00:39:30.503862 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Nov 6 00:39:30.503868 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Nov 6 00:39:30.503873 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Nov 6 00:39:30.503879 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Nov 6 00:39:30.503884 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Nov 6 00:39:30.503890 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Nov 6 00:39:30.503895 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Nov 6 00:39:30.503901 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Nov 6 00:39:30.503906 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Nov 6 00:39:30.503911 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Nov 6 00:39:30.503918 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Nov 6 00:39:30.503923 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Nov 6 00:39:30.503929 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Nov 6 00:39:30.503934 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Nov 6 00:39:30.503939 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Nov 6 00:39:30.503945 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Nov 6 00:39:30.503951 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Nov 6 00:39:30.503960 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Nov 6 00:39:30.503966 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Nov 6 00:39:30.503972 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Nov 6 00:39:30.503978 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Nov 6 00:39:30.503984 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Nov 6 00:39:30.503990 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Nov 6 00:39:30.503995 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Nov 6 00:39:30.504001 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Nov 6 00:39:30.504006 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Nov 6 00:39:30.504013 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Nov 6 00:39:30.504019 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Nov 6 00:39:30.504024 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Nov 6 00:39:30.504030 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Nov 6 00:39:30.504035 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Nov 6 00:39:30.504041 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Nov 6 00:39:30.504047 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Nov 6 00:39:30.504052 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Nov 6 00:39:30.504059 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Nov 6 00:39:30.504065 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Nov 6 00:39:30.504071 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Nov 6 00:39:30.504076 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Nov 6 00:39:30.504082 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Nov 6 00:39:30.504088 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Nov 6 00:39:30.504093 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Nov 6 00:39:30.504099 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Nov 6 00:39:30.504106 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Nov 6 00:39:30.504111 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Nov 6 00:39:30.504117 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Nov 6 00:39:30.504123 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Nov 6 00:39:30.504129 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Nov 6 00:39:30.504134 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Nov 6 00:39:30.504140 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Nov 6 00:39:30.504146 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Nov 6 00:39:30.504152 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Nov 6 00:39:30.504158 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Nov 6 00:39:30.504164 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Nov 6 00:39:30.504170 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Nov 6 00:39:30.504175 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Nov 6 00:39:30.504181 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Nov 6 00:39:30.504187 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Nov 6 00:39:30.504192 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Nov 6 00:39:30.504199 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Nov 6 00:39:30.504205 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Nov 6 00:39:30.504210 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Nov 6 00:39:30.504216 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Nov 6 00:39:30.504222 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Nov 6 00:39:30.504227 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Nov 6 00:39:30.504233 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Nov 6 00:39:30.504239 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Nov 6 00:39:30.504244 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Nov 6 00:39:30.504251 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Nov 6 00:39:30.504257 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Nov 6 00:39:30.504263 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Nov 6 00:39:30.504268 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Nov 6 00:39:30.504274 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Nov 6 00:39:30.504280 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Nov 6 00:39:30.504285 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Nov 6 00:39:30.504291 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Nov 6 00:39:30.504297 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Nov 6 00:39:30.504303 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Nov 6 00:39:30.504315 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Nov 6 00:39:30.504322 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Nov 6 00:39:30.504327 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Nov 6 00:39:30.504333 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Nov 6 00:39:30.504339 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Nov 6 00:39:30.504344 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Nov 6 00:39:30.504351 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Nov 6 00:39:30.504357 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Nov 6 00:39:30.504363 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Nov 6 00:39:30.504369 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Nov 6 00:39:30.504374 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Nov 6 00:39:30.504399 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Nov 6 00:39:30.504404 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Nov 6 00:39:30.504410 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Nov 6 00:39:30.504417 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Nov 6 00:39:30.504423 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Nov 6 00:39:30.504429 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Nov 6 00:39:30.504435 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Nov 6 00:39:30.504441 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Nov 6 00:39:30.504461 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Nov 6 00:39:30.504467 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Nov 6 00:39:30.504472 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Nov 6 00:39:30.504493 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Nov 6 00:39:30.504502 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Nov 6 00:39:30.504507 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Nov 6 00:39:30.504513 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Nov 6 00:39:30.504519 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Nov 6 00:39:30.504525 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Nov 6 00:39:30.504536 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Nov 6 00:39:30.504542 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Nov 6 00:39:30.504548 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 6 00:39:30.504555 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Nov 6 00:39:30.504561 kernel: TSC deadline timer available
Nov 6 00:39:30.504567 kernel: CPU topo: Max. logical packages: 128
Nov 6 00:39:30.504573 kernel: CPU topo: Max. logical dies: 128
Nov 6 00:39:30.504578 kernel: CPU topo: Max. dies per package: 1
Nov 6 00:39:30.504584 kernel: CPU topo: Max. threads per core: 1
Nov 6 00:39:30.504590 kernel: CPU topo: Num. cores per package: 1
Nov 6 00:39:30.504596 kernel: CPU topo: Num. threads per package: 1
Nov 6 00:39:30.504602 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs
Nov 6 00:39:30.504608 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Nov 6 00:39:30.504615 kernel: Booting paravirtualized kernel on VMware hypervisor
Nov 6 00:39:30.504621 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 6 00:39:30.504627 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Nov 6 00:39:30.504633 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Nov 6 00:39:30.504639 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Nov 6 00:39:30.504645 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Nov 6 00:39:30.504651 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Nov 6 00:39:30.504657 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Nov 6 00:39:30.504663 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Nov 6 00:39:30.504669 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Nov 6 00:39:30.504675 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Nov 6 00:39:30.504680 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Nov 6 00:39:30.504687 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Nov 6 00:39:30.504693 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Nov 6 00:39:30.504699 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Nov 6 00:39:30.504705 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Nov 6 00:39:30.504710 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Nov 6 00:39:30.504716 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Nov 6 00:39:30.504722 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Nov 6 00:39:30.504728 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Nov 6 00:39:30.504734 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Nov 6 00:39:30.504741 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=5a467f58ff1d38830572ea713da04924778847a98299b0cfa25690713b346f38
Nov 6 00:39:30.504747 kernel: random: crng init done
Nov 6 00:39:30.504753 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Nov 6 00:39:30.504759 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Nov 6 00:39:30.504765 kernel: printk: log_buf_len min size: 262144 bytes
Nov 6 00:39:30.504771 kernel: printk: log_buf_len: 1048576 bytes
Nov 6 00:39:30.504777 kernel: printk: early log buf free: 245688(93%)
Nov 6 00:39:30.504783 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 6 00:39:30.504789 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 6 00:39:30.504796 kernel: Fallback order for Node 0: 0
Nov 6 00:39:30.504802 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524157
Nov 6 00:39:30.504808 kernel: Policy zone: DMA32
Nov 6 00:39:30.504814 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 6 00:39:30.504820 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Nov 6 00:39:30.504826 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 6 00:39:30.504850 kernel: ftrace: allocated 157 pages with 5 groups
Nov 6 00:39:30.504856 kernel: Dynamic Preempt: voluntary
Nov 6 00:39:30.504862 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 6 00:39:30.504869 kernel: rcu: RCU event tracing is enabled.
Nov 6 00:39:30.504890 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Nov 6 00:39:30.504896 kernel: Trampoline variant of Tasks RCU enabled.
Nov 6 00:39:30.504902 kernel: Rude variant of Tasks RCU enabled.
Nov 6 00:39:30.504908 kernel: Tracing variant of Tasks RCU enabled.
Nov 6 00:39:30.504914 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 6 00:39:30.504920 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Nov 6 00:39:30.504926 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Nov 6 00:39:30.504932 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Nov 6 00:39:30.504945 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Nov 6 00:39:30.504953 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Nov 6 00:39:30.504959 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Nov 6 00:39:30.504965 kernel: Console: colour VGA+ 80x25
Nov 6 00:39:30.504971 kernel: printk: legacy console [tty0] enabled
Nov 6 00:39:30.504977 kernel: printk: legacy console [ttyS0] enabled
Nov 6 00:39:30.504983 kernel: ACPI: Core revision 20240827
Nov 6 00:39:30.504990 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Nov 6 00:39:30.504996 kernel: APIC: Switch to symmetric I/O mode setup
Nov 6 00:39:30.505002 kernel: x2apic enabled
Nov 6 00:39:30.505008 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 6 00:39:30.505014 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 6 00:39:30.505020 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Nov 6 00:39:30.505031 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Nov 6 00:39:30.505040 kernel: Disabled fast string operations
Nov 6 00:39:30.505046 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 6 00:39:30.505052 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 6 00:39:30.505058 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 6 00:39:30.505064 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Nov 6 00:39:30.505070 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Nov 6 00:39:30.505076 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Nov 6 00:39:30.505083 kernel: RETBleed: Mitigation: Enhanced IBRS
Nov 6 00:39:30.505089 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 6 00:39:30.505095 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 6 00:39:30.505101 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 6 00:39:30.505107 kernel: SRBDS: Unknown: Dependent on hypervisor status
Nov 6 00:39:30.505113 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 6 00:39:30.505119 kernel: active return thunk: its_return_thunk
Nov 6 00:39:30.505126 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 6 00:39:30.505132 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 6 00:39:30.505138 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 6 00:39:30.505144 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 6 00:39:30.505150 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 6 00:39:30.505156 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 6 00:39:30.505162 kernel: Freeing SMP alternatives memory: 32K
Nov 6 00:39:30.505168 kernel: pid_max: default: 131072 minimum: 1024
Nov 6 00:39:30.505174 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 6 00:39:30.505181 kernel: landlock: Up and running.
Nov 6 00:39:30.505187 kernel: SELinux: Initializing.
Nov 6 00:39:30.505193 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 6 00:39:30.505199 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 6 00:39:30.505205 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Nov 6 00:39:30.505211 kernel: Performance Events: Skylake events, core PMU driver.
Nov 6 00:39:30.505218 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Nov 6 00:39:30.505224 kernel: core: CPUID marked event: 'instructions' unavailable
Nov 6 00:39:30.505230 kernel: core: CPUID marked event: 'bus cycles' unavailable
Nov 6 00:39:30.505235 kernel: core: CPUID marked event: 'cache references' unavailable
Nov 6 00:39:30.505241 kernel: core: CPUID marked event: 'cache misses' unavailable
Nov 6 00:39:30.505247 kernel: core: CPUID marked event: 'branch instructions' unavailable
Nov 6 00:39:30.505254 kernel: core: CPUID marked event: 'branch misses' unavailable
Nov 6 00:39:30.505260 kernel: ... version: 1
Nov 6 00:39:30.505266 kernel: ... bit width: 48
Nov 6 00:39:30.505272 kernel: ... generic registers: 4
Nov 6 00:39:30.505278 kernel: ... value mask: 0000ffffffffffff
Nov 6 00:39:30.505284 kernel: ... max period: 000000007fffffff
Nov 6 00:39:30.505290 kernel: ... fixed-purpose events: 0
Nov 6 00:39:30.505296 kernel: ... event mask: 000000000000000f
Nov 6 00:39:30.505303 kernel: signal: max sigframe size: 1776
Nov 6 00:39:30.505309 kernel: rcu: Hierarchical SRCU implementation.
Nov 6 00:39:30.505315 kernel: rcu: Max phase no-delay instances is 400.
Nov 6 00:39:30.505321 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level
Nov 6 00:39:30.505326 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 6 00:39:30.505332 kernel: smp: Bringing up secondary CPUs ...
Nov 6 00:39:30.505338 kernel: smpboot: x86: Booting SMP configuration:
Nov 6 00:39:30.505345 kernel: .... node #0, CPUs: #1
Nov 6 00:39:30.505351 kernel: Disabled fast string operations
Nov 6 00:39:30.505357 kernel: smp: Brought up 1 node, 2 CPUs
Nov 6 00:39:30.505363 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Nov 6 00:39:30.505369 kernel: Memory: 1946760K/2096628K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15936K init, 2108K bss, 138484K reserved, 0K cma-reserved)
Nov 6 00:39:30.505375 kernel: devtmpfs: initialized
Nov 6 00:39:30.505381 kernel: x86/mm: Memory block size: 128MB
Nov 6 00:39:30.505388 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Nov 6 00:39:30.505394 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 6 00:39:30.505448 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Nov 6 00:39:30.505454 kernel: pinctrl core: initialized pinctrl subsystem
Nov 6 00:39:30.505461 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 6 00:39:30.505466 kernel: audit: initializing netlink subsys (disabled)
Nov 6 00:39:30.505472 kernel: audit: type=2000 audit(1762389567.271:1): state=initialized audit_enabled=0 res=1
Nov 6 00:39:30.505488 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 6 00:39:30.505494 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 6 00:39:30.505500 kernel: cpuidle: using governor menu
Nov 6 00:39:30.505506 kernel: Simple Boot Flag at 0x36 set to 0x80
Nov 6 00:39:30.505512 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 6 00:39:30.505518 kernel: dca service started, version 1.12.1
Nov 6 00:39:30.505524 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f]
Nov 6 00:39:30.505536 kernel: PCI: Using configuration type 1 for base access
Nov 6 00:39:30.505543 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 6 00:39:30.505550 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 6 00:39:30.505556 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 6 00:39:30.505562 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 6 00:39:30.505568 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 6 00:39:30.505574 kernel: ACPI: Added _OSI(Module Device)
Nov 6 00:39:30.505582 kernel: ACPI: Added _OSI(Processor Device)
Nov 6 00:39:30.505588 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 6 00:39:30.505594 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 6 00:39:30.505600 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Nov 6 00:39:30.505607 kernel: ACPI: Interpreter enabled
Nov 6 00:39:30.505613 kernel: ACPI: PM: (supports S0 S1 S5)
Nov 6 00:39:30.505619 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 6 00:39:30.505625 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 6 00:39:30.505632 kernel: PCI: Using E820 reservations for host bridge windows
Nov 6 00:39:30.505638 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Nov 6 00:39:30.505645 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Nov 6 00:39:30.505742 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 6 00:39:30.505812 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Nov 6 00:39:30.505880 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Nov 6 00:39:30.505889 kernel: PCI host bridge to bus 0000:00
Nov 6 00:39:30.505956 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 6 00:39:30.506014 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Nov 6 00:39:30.506072 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 6 00:39:30.506128 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 6 00:39:30.506187 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Nov 6 00:39:30.506244 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Nov 6 00:39:30.506331 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint
Nov 6 00:39:30.506406 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge
Nov 6 00:39:30.506485 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Nov 6 00:39:30.506566 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Nov 6 00:39:30.506639 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint
Nov 6 00:39:30.506707 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f]
Nov 6 00:39:30.506773 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Nov 6 00:39:30.506838 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Nov 6 00:39:30.506902 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Nov 6 00:39:30.506965 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk
Nov 6 00:39:30.507041 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 6 00:39:30.507131 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Nov 6 00:39:30.507199 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Nov 6 00:39:30.507268 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint
Nov 6 00:39:30.507332 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf]
Nov 6 00:39:30.507404 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit]
Nov 6 00:39:30.507502 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint
Nov 6 00:39:30.507601 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f]
Nov 6 00:39:30.507799 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref]
Nov 6 00:39:30.507873 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff]
Nov 6 00:39:30.507938 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref]
Nov 6 00:39:30.508012 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 6 00:39:30.508086 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge
Nov 6 00:39:30.508155 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Nov 6 00:39:30.508224 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Nov 6 00:39:30.508290 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Nov 6 00:39:30.508355 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Nov 6 00:39:30.508423 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Nov 6 00:39:30.508504 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Nov 6 00:39:30.508577 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Nov 6 00:39:30.508644 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Nov 6 00:39:30.508710 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Nov 6 00:39:30.508781 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Nov 6 00:39:30.508853 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Nov 6 00:39:30.508919 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Nov 6 00:39:30.508994 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Nov 6 00:39:30.509061 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Nov 6 00:39:30.509125 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Nov 6 00:39:30.511521 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Nov 6 00:39:30.511607 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Nov 6 00:39:30.511682 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Nov 6 00:39:30.511750 kernel: pci 0000:00:15.2: bridge window [mem
0xfcd00000-0xfcdfffff] Nov 6 00:39:30.511818 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 6 00:39:30.511890 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.511964 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.512031 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 6 00:39:30.512100 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 6 00:39:30.512167 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 6 00:39:30.512233 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.512311 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.512378 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 6 00:39:30.512444 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 6 00:39:30.512941 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 6 00:39:30.513008 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.513079 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.513147 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 6 00:39:30.513213 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 6 00:39:30.513278 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 6 00:39:30.513347 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.513417 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.513519 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 6 00:39:30.513891 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 6 00:39:30.513965 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 6 00:39:30.514033 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Nov 6 
00:39:30.514111 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.514180 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 6 00:39:30.514246 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 6 00:39:30.514313 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 6 00:39:30.514378 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.514449 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.516549 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 6 00:39:30.516624 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 6 00:39:30.516694 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 6 00:39:30.516761 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.516833 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.516904 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 6 00:39:30.516972 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 6 00:39:30.517038 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 6 00:39:30.517104 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 6 00:39:30.517169 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.517240 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.517310 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 6 00:39:30.517376 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 6 00:39:30.517442 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 6 00:39:30.517532 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 6 00:39:30.517599 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.517673 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe 
Root Port Nov 6 00:39:30.517741 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 6 00:39:30.517807 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 6 00:39:30.517878 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 6 00:39:30.517945 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.518017 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.518082 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 6 00:39:30.518150 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Nov 6 00:39:30.518215 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 6 00:39:30.518279 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.518348 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.518414 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 6 00:39:30.518493 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 6 00:39:30.518563 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 6 00:39:30.518628 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.518696 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.518762 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 6 00:39:30.518837 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 6 00:39:30.520305 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 6 00:39:30.520398 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.520508 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.520592 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 6 00:39:30.520677 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 6 00:39:30.520756 kernel: pci 0000:00:16.7: 
bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 6 00:39:30.520853 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.520947 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.521026 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 6 00:39:30.521093 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 6 00:39:30.521158 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 6 00:39:30.521223 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 6 00:39:30.521288 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.521361 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.521427 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 6 00:39:30.521510 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 6 00:39:30.521588 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 6 00:39:30.521654 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 6 00:39:30.521718 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.521788 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.521875 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 6 00:39:30.521968 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 6 00:39:30.522055 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 6 00:39:30.522123 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 6 00:39:30.522188 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.522260 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.522326 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 6 00:39:30.522395 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 6 00:39:30.522461 
kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 6 00:39:30.523614 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.523694 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.523764 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 6 00:39:30.523832 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 6 00:39:30.523902 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 6 00:39:30.523969 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.524042 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.524109 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 6 00:39:30.524188 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 6 00:39:30.524255 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 6 00:39:30.524324 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.524395 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.526790 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 6 00:39:30.527097 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 6 00:39:30.527984 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 6 00:39:30.528558 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.528640 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.528710 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 6 00:39:30.528779 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 6 00:39:30.528844 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 6 00:39:30.528924 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.528993 kernel: pci 0000:00:18.0: [15ad:07a0] 
type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.529062 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 6 00:39:30.529127 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 6 00:39:30.529191 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 6 00:39:30.529255 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 6 00:39:30.529319 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.529388 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.529455 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 6 00:39:30.529535 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 6 00:39:30.529601 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 6 00:39:30.529665 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 6 00:39:30.529729 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.529801 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.529868 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 6 00:39:30.529933 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 6 00:39:30.530009 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 6 00:39:30.530087 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.530159 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.530227 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 6 00:39:30.530291 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 6 00:39:30.530356 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 6 00:39:30.530420 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.530550 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.530619 
kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 6 00:39:30.530687 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 6 00:39:30.530752 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 6 00:39:30.530816 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.530896 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.530961 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 6 00:39:30.531027 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 6 00:39:30.531093 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 6 00:39:30.531176 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.531247 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.531319 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 6 00:39:30.531386 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 6 00:39:30.531452 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 6 00:39:30.531530 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.533217 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 00:39:30.533291 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 6 00:39:30.533359 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 6 00:39:30.533425 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 6 00:39:30.534466 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.534557 kernel: pci_bus 0000:01: extended config space not accessible Nov 6 00:39:30.534648 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 6 00:39:30.534732 kernel: pci_bus 0000:02: extended config space not accessible Nov 6 00:39:30.534742 kernel: acpiphp: Slot [32] registered Nov 6 00:39:30.534749 kernel: acpiphp: Slot 
[33] registered Nov 6 00:39:30.534756 kernel: acpiphp: Slot [34] registered Nov 6 00:39:30.534762 kernel: acpiphp: Slot [35] registered Nov 6 00:39:30.534771 kernel: acpiphp: Slot [36] registered Nov 6 00:39:30.534777 kernel: acpiphp: Slot [37] registered Nov 6 00:39:30.534783 kernel: acpiphp: Slot [38] registered Nov 6 00:39:30.534790 kernel: acpiphp: Slot [39] registered Nov 6 00:39:30.534796 kernel: acpiphp: Slot [40] registered Nov 6 00:39:30.534802 kernel: acpiphp: Slot [41] registered Nov 6 00:39:30.534808 kernel: acpiphp: Slot [42] registered Nov 6 00:39:30.534815 kernel: acpiphp: Slot [43] registered Nov 6 00:39:30.534822 kernel: acpiphp: Slot [44] registered Nov 6 00:39:30.534828 kernel: acpiphp: Slot [45] registered Nov 6 00:39:30.534852 kernel: acpiphp: Slot [46] registered Nov 6 00:39:30.534859 kernel: acpiphp: Slot [47] registered Nov 6 00:39:30.534865 kernel: acpiphp: Slot [48] registered Nov 6 00:39:30.534872 kernel: acpiphp: Slot [49] registered Nov 6 00:39:30.534893 kernel: acpiphp: Slot [50] registered Nov 6 00:39:30.534900 kernel: acpiphp: Slot [51] registered Nov 6 00:39:30.534907 kernel: acpiphp: Slot [52] registered Nov 6 00:39:30.534913 kernel: acpiphp: Slot [53] registered Nov 6 00:39:30.534919 kernel: acpiphp: Slot [54] registered Nov 6 00:39:30.534925 kernel: acpiphp: Slot [55] registered Nov 6 00:39:30.534931 kernel: acpiphp: Slot [56] registered Nov 6 00:39:30.534938 kernel: acpiphp: Slot [57] registered Nov 6 00:39:30.534945 kernel: acpiphp: Slot [58] registered Nov 6 00:39:30.534951 kernel: acpiphp: Slot [59] registered Nov 6 00:39:30.534958 kernel: acpiphp: Slot [60] registered Nov 6 00:39:30.534964 kernel: acpiphp: Slot [61] registered Nov 6 00:39:30.534970 kernel: acpiphp: Slot [62] registered Nov 6 00:39:30.534976 kernel: acpiphp: Slot [63] registered Nov 6 00:39:30.535044 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Nov 6 00:39:30.535112 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff 
window] (subtractive decode) Nov 6 00:39:30.535177 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Nov 6 00:39:30.535241 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Nov 6 00:39:30.535305 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Nov 6 00:39:30.535387 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Nov 6 00:39:30.535489 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Nov 6 00:39:30.535565 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Nov 6 00:39:30.535632 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Nov 6 00:39:30.535698 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Nov 6 00:39:30.535763 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 6 00:39:30.535848 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Nov 6 00:39:30.535918 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 6 00:39:30.535987 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 6 00:39:30.536054 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 6 00:39:30.536121 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 6 00:39:30.536187 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 6 00:39:30.536253 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 6 00:39:30.536321 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 6 00:39:30.536390 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 6 00:39:30.538749 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Nov 6 00:39:30.538831 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Nov 6 00:39:30.538913 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Nov 6 00:39:30.538983 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Nov 6 00:39:30.539070 kernel: pci 0000:0b:00.0: 
BAR 3 [io 0x5000-0x500f] Nov 6 00:39:30.539141 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Nov 6 00:39:30.539208 kernel: pci 0000:0b:00.0: supports D1 D2 Nov 6 00:39:30.539274 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 6 00:39:30.539340 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Nov 6 00:39:30.539406 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 6 00:39:30.539917 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 6 00:39:30.539996 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 6 00:39:30.540066 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 6 00:39:30.540135 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 6 00:39:30.540203 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 6 00:39:30.540268 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 6 00:39:30.540335 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 6 00:39:30.540402 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 6 00:39:30.540468 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 6 00:39:30.541574 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 6 00:39:30.541651 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 6 00:39:30.541724 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 6 00:39:30.541794 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 6 00:39:30.541891 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 6 00:39:30.541977 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 6 00:39:30.542042 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 6 00:39:30.542107 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 6 00:39:30.542173 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 6 00:39:30.542239 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 6 00:39:30.542305 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 6 00:39:30.542374 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 6 00:39:30.542440 kernel: pci 
0000:00:18.6: PCI bridge to [bus 21] Nov 6 00:39:30.543527 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 6 00:39:30.543538 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Nov 6 00:39:30.543545 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Nov 6 00:39:30.543566 kernel: ACPI: PCI: Interrupt link LNKB disabled Nov 6 00:39:30.543575 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 6 00:39:30.543582 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Nov 6 00:39:30.543588 kernel: iommu: Default domain type: Translated Nov 6 00:39:30.543595 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 6 00:39:30.543601 kernel: PCI: Using ACPI for IRQ routing Nov 6 00:39:30.543607 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 6 00:39:30.543614 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Nov 6 00:39:30.543621 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Nov 6 00:39:30.543688 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Nov 6 00:39:30.543752 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Nov 6 00:39:30.543816 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 6 00:39:30.543825 kernel: vgaarb: loaded Nov 6 00:39:30.543832 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Nov 6 00:39:30.543839 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Nov 6 00:39:30.543847 kernel: clocksource: Switched to clocksource tsc-early Nov 6 00:39:30.543854 kernel: VFS: Disk quotas dquot_6.6.0 Nov 6 00:39:30.543860 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 6 00:39:30.543867 kernel: pnp: PnP ACPI init Nov 6 00:39:30.543937 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Nov 6 00:39:30.543999 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Nov 6 00:39:30.544061 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Nov 6 
00:39:30.544125 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Nov 6 00:39:30.544190 kernel: pnp 00:06: [dma 2] Nov 6 00:39:30.544255 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Nov 6 00:39:30.544314 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Nov 6 00:39:30.544376 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Nov 6 00:39:30.544385 kernel: pnp: PnP ACPI: found 8 devices Nov 6 00:39:30.544392 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 6 00:39:30.544399 kernel: NET: Registered PF_INET protocol family Nov 6 00:39:30.544406 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 6 00:39:30.544413 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 6 00:39:30.544419 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 6 00:39:30.544446 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 6 00:39:30.544453 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 6 00:39:30.544459 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 6 00:39:30.544466 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 6 00:39:30.544472 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 6 00:39:30.544485 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 6 00:39:30.544507 kernel: NET: Registered PF_XDP protocol family Nov 6 00:39:30.545567 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Nov 6 00:39:30.545636 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 6 00:39:30.545702 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 6 00:39:30.545768 kernel: pci 0000:00:15.5: bridge 
window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 6 00:39:30.545841 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 6 00:39:30.545950 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Nov 6 00:39:30.546016 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Nov 6 00:39:30.546085 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Nov 6 00:39:30.546164 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Nov 6 00:39:30.546231 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Nov 6 00:39:30.546298 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Nov 6 00:39:30.546363 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Nov 6 00:39:30.546428 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Nov 6 00:39:30.547510 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Nov 6 00:39:30.547579 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Nov 6 00:39:30.547645 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Nov 6 00:39:30.547711 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Nov 6 00:39:30.547776 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Nov 6 00:39:30.547843 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Nov 6 00:39:30.547910 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Nov 6 00:39:30.547976 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Nov 6 00:39:30.548040 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 
21] add_size 1000 Nov 6 00:39:30.548105 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Nov 6 00:39:30.548170 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Nov 6 00:39:30.548234 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Nov 6 00:39:30.548310 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.548377 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.548441 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.550047 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.550117 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.550183 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.550249 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.550319 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.550385 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.550450 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.550555 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.550637 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.550703 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.550770 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.550834 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.550897 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.550962 kernel: pci 0000:00:16.6: 
bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.551035 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.551101 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.551166 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.551239 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.551304 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.551369 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.551439 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.551775 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.551846 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.551934 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.552029 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.552096 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.552161 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.552227 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.552291 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.552359 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.552423 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.553738 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.553810 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.553876 kernel: pci 0000:00:18.5: 
bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.553939 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.554003 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.554069 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.554147 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.554211 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.554273 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.554348 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.554413 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.554709 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.554788 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.554886 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.554993 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.555059 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.555122 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.555185 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.555260 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.555340 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.555420 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.555533 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.555599 kernel: pci 0000:00:17.6: 
bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.555663 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.555726 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.556501 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.556592 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.556664 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.556732 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.556911 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.556980 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.557047 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.557113 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.557179 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.557249 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.557316 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.557382 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.557448 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.557529 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.557596 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.557665 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.557731 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.557796 kernel: pci 0000:00:15.6: 
bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.557861 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.557939 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.558005 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.558088 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.558171 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.558253 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Nov 6 00:39:30.558317 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Nov 6 00:39:30.558383 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 6 00:39:30.558449 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Nov 6 00:39:30.558522 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 6 00:39:30.558589 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 6 00:39:30.558654 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 6 00:39:30.559150 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Nov 6 00:39:30.559226 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 6 00:39:30.559327 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 6 00:39:30.559393 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 6 00:39:30.559459 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Nov 6 00:39:30.559539 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 6 00:39:30.559605 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 6 00:39:30.559669 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 6 00:39:30.559734 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 6 00:39:30.559800 kernel: pci 0000:00:15.2: PCI 
bridge to [bus 05] Nov 6 00:39:30.559864 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 6 00:39:30.559929 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 6 00:39:30.559996 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 6 00:39:30.560061 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 6 00:39:30.560126 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 6 00:39:30.560189 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 6 00:39:30.560254 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 6 00:39:30.560319 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 6 00:39:30.560402 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 6 00:39:30.560471 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 6 00:39:30.560683 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 6 00:39:30.560765 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 6 00:39:30.560831 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 6 00:39:30.560898 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 6 00:39:30.560962 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 6 00:39:30.561030 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 6 00:39:30.561094 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 6 00:39:30.561159 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 6 00:39:30.561227 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Nov 6 00:39:30.561293 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 6 00:39:30.561357 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 6 00:39:30.561428 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 6 00:39:30.561516 kernel: pci 0000:00:16.0: bridge window 
[mem 0xc0200000-0xc03fffff 64bit pref] Nov 6 00:39:30.561584 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 6 00:39:30.561648 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 6 00:39:30.561731 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 6 00:39:30.561813 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 6 00:39:30.561889 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 6 00:39:30.561959 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 6 00:39:30.562024 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 6 00:39:30.562089 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 6 00:39:30.562154 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 6 00:39:30.562219 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 6 00:39:30.562283 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 6 00:39:30.562348 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 6 00:39:30.562416 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Nov 6 00:39:30.562490 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 6 00:39:30.562557 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 6 00:39:30.562621 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 6 00:39:30.562686 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 6 00:39:30.562752 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 6 00:39:30.562819 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 6 00:39:30.562912 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 6 00:39:30.562979 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 6 00:39:30.563045 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 6 00:39:30.563112 kernel: pci 0000:00:16.7: bridge window 
[mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 6 00:39:30.563179 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 6 00:39:30.563248 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 6 00:39:30.563313 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 6 00:39:30.563379 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 6 00:39:30.563446 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 6 00:39:30.563524 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 6 00:39:30.564338 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 6 00:39:30.564408 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 6 00:39:30.564488 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 6 00:39:30.564559 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 6 00:39:30.564627 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 6 00:39:30.564694 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 6 00:39:30.564761 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 6 00:39:30.564827 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 6 00:39:30.564892 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 6 00:39:30.567864 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 6 00:39:30.567947 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 6 00:39:30.568015 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 6 00:39:30.568083 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 6 00:39:30.568150 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 6 00:39:30.568215 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 6 00:39:30.568287 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 6 00:39:30.568352 kernel: pci 0000:00:17.6: bridge window [mem 
0xfbb00000-0xfbbfffff] Nov 6 00:39:30.568420 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 6 00:39:30.568541 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 6 00:39:30.568612 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 6 00:39:30.568678 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 6 00:39:30.568751 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 6 00:39:30.568840 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 6 00:39:30.568907 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 6 00:39:30.568977 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 6 00:39:30.569467 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 6 00:39:30.569548 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 6 00:39:30.569617 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 6 00:39:30.569684 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 6 00:39:30.569755 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 6 00:39:30.569821 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 6 00:39:30.569888 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 6 00:39:30.569956 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 6 00:39:30.570022 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 6 00:39:30.570087 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 6 00:39:30.570157 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 6 00:39:30.570224 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 6 00:39:30.570291 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 6 00:39:30.570359 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 6 00:39:30.570426 kernel: pci 0000:00:18.5: bridge window [mem 
0xfbe00000-0xfbefffff] Nov 6 00:39:30.570506 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 6 00:39:30.570580 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 6 00:39:30.570646 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 6 00:39:30.570712 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 6 00:39:30.570780 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 6 00:39:30.570867 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 6 00:39:30.570951 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 6 00:39:30.571019 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Nov 6 00:39:30.571094 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 6 00:39:30.571151 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 6 00:39:30.571209 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Nov 6 00:39:30.571266 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Nov 6 00:39:30.571329 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Nov 6 00:39:30.571391 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Nov 6 00:39:30.571450 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 6 00:39:30.571518 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Nov 6 00:39:30.571577 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 6 00:39:30.571636 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 6 00:39:30.571695 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Nov 6 00:39:30.571757 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Nov 6 00:39:30.571822 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Nov 6 00:39:30.571882 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Nov 6 00:39:30.571941 kernel: 
pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Nov 6 00:39:30.572014 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Nov 6 00:39:30.573367 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Nov 6 00:39:30.573434 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Nov 6 00:39:30.573510 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Nov 6 00:39:30.573572 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Nov 6 00:39:30.573632 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Nov 6 00:39:30.573696 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Nov 6 00:39:30.573759 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Nov 6 00:39:30.573823 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Nov 6 00:39:30.573883 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 6 00:39:30.573947 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Nov 6 00:39:30.574006 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Nov 6 00:39:30.574070 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Nov 6 00:39:30.574132 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Nov 6 00:39:30.574196 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Nov 6 00:39:30.574255 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Nov 6 00:39:30.574320 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Nov 6 00:39:30.574382 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Nov 6 00:39:30.574442 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Nov 6 00:39:30.574522 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Nov 6 00:39:30.574584 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Nov 6 00:39:30.574643 kernel: pci_bus 0000:0c: resource 2 [mem 
0xe7700000-0xe77fffff 64bit pref] Nov 6 00:39:30.574707 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Nov 6 00:39:30.574770 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Nov 6 00:39:30.574829 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Nov 6 00:39:30.574928 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Nov 6 00:39:30.574988 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 6 00:39:30.575051 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Nov 6 00:39:30.575114 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 6 00:39:30.575178 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Nov 6 00:39:30.575238 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Nov 6 00:39:30.575301 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Nov 6 00:39:30.575361 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Nov 6 00:39:30.575427 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Nov 6 00:39:30.575613 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 6 00:39:30.575680 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Nov 6 00:39:30.575742 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Nov 6 00:39:30.575801 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 6 00:39:30.575870 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Nov 6 00:39:30.575934 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Nov 6 00:39:30.575994 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Nov 6 00:39:30.576058 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Nov 6 00:39:30.576118 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Nov 6 00:39:30.576178 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Nov 6 
00:39:30.576244 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Nov 6 00:39:30.576304 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 6 00:39:30.576370 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Nov 6 00:39:30.576430 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 6 00:39:30.576504 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Nov 6 00:39:30.576565 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Nov 6 00:39:30.576631 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Nov 6 00:39:30.576706 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Nov 6 00:39:30.576768 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Nov 6 00:39:30.576827 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 6 00:39:30.576889 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Nov 6 00:39:30.576950 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Nov 6 00:39:30.577008 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Nov 6 00:39:30.577069 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Nov 6 00:39:30.577128 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Nov 6 00:39:30.577185 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Nov 6 00:39:30.577248 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Nov 6 00:39:30.577309 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Nov 6 00:39:30.577372 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Nov 6 00:39:30.577430 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 6 00:39:30.577710 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Nov 6 00:39:30.577776 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Nov 6 00:39:30.577843 
kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Nov 6 00:39:30.577903 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Nov 6 00:39:30.577968 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Nov 6 00:39:30.578028 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Nov 6 00:39:30.578122 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Nov 6 00:39:30.578184 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 6 00:39:30.578253 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 6 00:39:30.578263 kernel: PCI: CLS 32 bytes, default 64 Nov 6 00:39:30.578270 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 6 00:39:30.578276 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Nov 6 00:39:30.578283 kernel: clocksource: Switched to clocksource tsc Nov 6 00:39:30.578295 kernel: Initialise system trusted keyrings Nov 6 00:39:30.581385 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 6 00:39:30.581395 kernel: Key type asymmetric registered Nov 6 00:39:30.581402 kernel: Asymmetric key parser 'x509' registered Nov 6 00:39:30.581408 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 6 00:39:30.581415 kernel: io scheduler mq-deadline registered Nov 6 00:39:30.581421 kernel: io scheduler kyber registered Nov 6 00:39:30.581428 kernel: io scheduler bfq registered Nov 6 00:39:30.581523 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Nov 6 00:39:30.581594 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.581662 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Nov 6 00:39:30.581728 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ 
Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.581795 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Nov 6 00:39:30.581862 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.581928 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Nov 6 00:39:30.581993 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.582092 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Nov 6 00:39:30.582156 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.582221 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Nov 6 00:39:30.582289 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.582354 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Nov 6 00:39:30.582420 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.582499 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Nov 6 00:39:30.582565 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.582629 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Nov 6 00:39:30.582693 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.582761 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Nov 6 00:39:30.582824 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.582936 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Nov 6 00:39:30.583001 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.583067 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Nov 6 00:39:30.583136 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.583210 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Nov 6 00:39:30.583279 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.583345 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Nov 6 00:39:30.583412 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.583486 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Nov 6 00:39:30.583573 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.583639 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Nov 6 00:39:30.583704 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.584785 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Nov 6 00:39:30.584893 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.584962 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Nov 6 00:39:30.585031 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- 
PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.585104 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Nov 6 00:39:30.585169 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.585238 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Nov 6 00:39:30.592228 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.592314 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Nov 6 00:39:30.592385 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.592458 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Nov 6 00:39:30.592538 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.592606 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Nov 6 00:39:30.592674 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.592741 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Nov 6 00:39:30.592809 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.592920 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Nov 6 00:39:30.592987 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.593053 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Nov 6 00:39:30.593119 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- 
AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.593186 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Nov 6 00:39:30.593251 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.593322 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Nov 6 00:39:30.595376 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.595455 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Nov 6 00:39:30.595551 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.595622 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Nov 6 00:39:30.595690 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.595761 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Nov 6 00:39:30.595829 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.595897 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Nov 6 00:39:30.595964 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 00:39:30.595977 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 6 00:39:30.595984 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 6 00:39:30.595992 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 6 00:39:30.596000 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Nov 6 00:39:30.596006 kernel: serio: i8042 
KBD port at 0x60,0x64 irq 1 Nov 6 00:39:30.596013 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 6 00:39:30.596081 kernel: rtc_cmos 00:01: registered as rtc0 Nov 6 00:39:30.596144 kernel: rtc_cmos 00:01: setting system clock to 2025-11-06T00:39:29 UTC (1762389569) Nov 6 00:39:30.596155 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 6 00:39:30.596215 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Nov 6 00:39:30.596224 kernel: intel_pstate: CPU model not supported Nov 6 00:39:30.596231 kernel: NET: Registered PF_INET6 protocol family Nov 6 00:39:30.596238 kernel: Segment Routing with IPv6 Nov 6 00:39:30.596245 kernel: In-situ OAM (IOAM) with IPv6 Nov 6 00:39:30.596252 kernel: NET: Registered PF_PACKET protocol family Nov 6 00:39:30.596260 kernel: Key type dns_resolver registered Nov 6 00:39:30.596267 kernel: IPI shorthand broadcast: enabled Nov 6 00:39:30.596274 kernel: sched_clock: Marking stable (1403151305, 167976015)->(1585624179, -14496859) Nov 6 00:39:30.596281 kernel: registered taskstats version 1 Nov 6 00:39:30.596288 kernel: Loading compiled-in X.509 certificates Nov 6 00:39:30.596295 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 92154d1aa04a8c1424f65981683e67110e07d121' Nov 6 00:39:30.596302 kernel: Demotion targets for Node 0: null Nov 6 00:39:30.596309 kernel: Key type .fscrypt registered Nov 6 00:39:30.596317 kernel: Key type fscrypt-provisioning registered Nov 6 00:39:30.596323 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 6 00:39:30.596330 kernel: ima: Allocated hash algorithm: sha1
Nov 6 00:39:30.596337 kernel: ima: No architecture policies found
Nov 6 00:39:30.596343 kernel: clk: Disabling unused clocks
Nov 6 00:39:30.596350 kernel: Freeing unused kernel image (initmem) memory: 15936K
Nov 6 00:39:30.596358 kernel: Write protecting the kernel read-only data: 40960k
Nov 6 00:39:30.596365 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 6 00:39:30.596372 kernel: Run /init as init process
Nov 6 00:39:30.596379 kernel: with arguments:
Nov 6 00:39:30.596387 kernel: /init
Nov 6 00:39:30.596394 kernel: with environment:
Nov 6 00:39:30.596400 kernel: HOME=/
Nov 6 00:39:30.596408 kernel: TERM=linux
Nov 6 00:39:30.596414 kernel: SCSI subsystem initialized
Nov 6 00:39:30.596421 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Nov 6 00:39:30.596428 kernel: vmw_pvscsi: using 64bit dma
Nov 6 00:39:30.596435 kernel: vmw_pvscsi: max_id: 16
Nov 6 00:39:30.596442 kernel: vmw_pvscsi: setting ring_pages to 8
Nov 6 00:39:30.596448 kernel: vmw_pvscsi: enabling reqCallThreshold
Nov 6 00:39:30.596455 kernel: vmw_pvscsi: driver-based request coalescing enabled
Nov 6 00:39:30.596462 kernel: vmw_pvscsi: using MSI-X
Nov 6 00:39:30.597430 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Nov 6 00:39:30.597528 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Nov 6 00:39:30.597612 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Nov 6 00:39:30.597686 kernel: sd 0:0:0:0: [sda] 25804800 512-byte logical blocks: (13.2 GB/12.3 GiB)
Nov 6 00:39:30.597758 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 6 00:39:30.597832 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Nov 6 00:39:30.597920 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Nov 6 00:39:30.598016 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Nov 6 00:39:30.598026 kernel: libata version 3.00 loaded.
Nov 6 00:39:30.598033 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 6 00:39:30.598103 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 6 00:39:30.598177 kernel: ata_piix 0000:00:07.1: version 2.13
Nov 6 00:39:30.598250 kernel: scsi host1: ata_piix
Nov 6 00:39:30.598322 kernel: scsi host2: ata_piix
Nov 6 00:39:30.598333 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0
Nov 6 00:39:30.598341 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0
Nov 6 00:39:30.598347 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Nov 6 00:39:30.598427 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Nov 6 00:39:30.598515 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Nov 6 00:39:30.598526 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 6 00:39:30.598533 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 6 00:39:30.598540 kernel: device-mapper: uevent: version 1.0.3
Nov 6 00:39:30.598548 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 6 00:39:30.598618 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 6 00:39:30.598630 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 6 00:39:30.598637 kernel: raid6: avx2x4 gen() 47463 MB/s
Nov 6 00:39:30.598644 kernel: raid6: avx2x2 gen() 52256 MB/s
Nov 6 00:39:30.598651 kernel: raid6: avx2x1 gen() 45259 MB/s
Nov 6 00:39:30.598658 kernel: raid6: using algorithm avx2x2 gen() 52256 MB/s
Nov 6 00:39:30.598665 kernel: raid6: .... xor() 32662 MB/s, rmw enabled
Nov 6 00:39:30.598671 kernel: raid6: using avx2x2 recovery algorithm
Nov 6 00:39:30.598679 kernel: xor: automatically using best checksumming function avx
Nov 6 00:39:30.598686 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 6 00:39:30.598694 kernel: BTRFS: device fsid 4dd99ff0-78f7-441c-acc1-7ff3d924a9b4 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (196)
Nov 6 00:39:30.598701 kernel: BTRFS info (device dm-0): first mount of filesystem 4dd99ff0-78f7-441c-acc1-7ff3d924a9b4
Nov 6 00:39:30.598708 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:39:30.598715 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 6 00:39:30.598722 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 6 00:39:30.598729 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 6 00:39:30.598736 kernel: loop: module loaded
Nov 6 00:39:30.598743 kernel: loop0: detected capacity change from 0 to 100120
Nov 6 00:39:30.598750 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 6 00:39:30.598758 systemd[1]: Successfully made /usr/ read-only.
Nov 6 00:39:30.598768 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 6 00:39:30.598776 systemd[1]: Detected virtualization vmware.
Nov 6 00:39:30.598784 systemd[1]: Detected architecture x86-64.
Nov 6 00:39:30.598791 systemd[1]: Running in initrd.
Nov 6 00:39:30.598797 systemd[1]: No hostname configured, using default hostname.
Nov 6 00:39:30.598805 systemd[1]: Hostname set to .
Nov 6 00:39:30.598813 systemd[1]: Initializing machine ID from random generator.
Nov 6 00:39:30.598820 systemd[1]: Queued start job for default target initrd.target.
Nov 6 00:39:30.598827 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 6 00:39:30.598834 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 00:39:30.598842 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 00:39:30.598849 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 6 00:39:30.598856 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 6 00:39:30.598865 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 6 00:39:30.598872 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 6 00:39:30.598880 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 00:39:30.598887 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 6 00:39:30.598894 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 6 00:39:30.598900 systemd[1]: Reached target paths.target - Path Units.
Nov 6 00:39:30.598909 systemd[1]: Reached target slices.target - Slice Units.
Nov 6 00:39:30.598916 systemd[1]: Reached target swap.target - Swaps.
Nov 6 00:39:30.598923 systemd[1]: Reached target timers.target - Timer Units.
Nov 6 00:39:30.598929 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 00:39:30.598936 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 00:39:30.598943 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 6 00:39:30.598950 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 6 00:39:30.598958 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 00:39:30.598965 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 6 00:39:30.598973 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 00:39:30.598980 systemd[1]: Reached target sockets.target - Socket Units.
Nov 6 00:39:30.598987 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Nov 6 00:39:30.598994 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 6 00:39:30.599001 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 6 00:39:30.599009 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 6 00:39:30.599017 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 6 00:39:30.599024 systemd[1]: Starting systemd-fsck-usr.service...
Nov 6 00:39:30.599031 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 6 00:39:30.599038 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 6 00:39:30.599045 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:39:30.599053 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 6 00:39:30.599061 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 00:39:30.599068 systemd[1]: Finished systemd-fsck-usr.service.
Nov 6 00:39:30.599075 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 6 00:39:30.599097 systemd-journald[332]: Collecting audit messages is disabled.
Nov 6 00:39:30.599114 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 6 00:39:30.599122 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 6 00:39:30.599131 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 6 00:39:30.599139 kernel: Bridge firewalling registered
Nov 6 00:39:30.599145 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 6 00:39:30.599153 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 00:39:30.599161 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:39:30.599168 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 6 00:39:30.599175 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 6 00:39:30.599183 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 6 00:39:30.599191 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 6 00:39:30.599198 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 00:39:30.599205 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 6 00:39:30.599212 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Nov 6 00:39:30.599220 systemd-journald[332]: Journal started
Nov 6 00:39:30.599237 systemd-journald[332]: Runtime Journal (/run/log/journal/58fa4908d004433bb3f6e6b624ea7d1a) is 4.8M, max 38.5M, 33.7M free.
Nov 6 00:39:30.538264 systemd-modules-load[335]: Inserted module 'br_netfilter'
Nov 6 00:39:30.601072 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 6 00:39:30.606730 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 6 00:39:30.612533 dracut-cmdline[360]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 ip=139.178.70.101::139.178.70.97:28::ens192:off:1.1.1.1:1.0.0.1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=5a467f58ff1d38830572ea713da04924778847a98299b0cfa25690713b346f38
Nov 6 00:39:30.619661 systemd-tmpfiles[381]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 6 00:39:30.623950 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 00:39:30.638408 systemd-resolved[355]: Positive Trust Anchors:
Nov 6 00:39:30.638416 systemd-resolved[355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 6 00:39:30.638418 systemd-resolved[355]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 6 00:39:30.638439 systemd-resolved[355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 6 00:39:30.656241 systemd-resolved[355]: Defaulting to hostname 'linux'.
Nov 6 00:39:30.656851 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 6 00:39:30.657070 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 6 00:39:30.687500 kernel: Loading iSCSI transport class v2.0-870.
Nov 6 00:39:30.698488 kernel: iscsi: registered transport (tcp)
Nov 6 00:39:30.723807 kernel: iscsi: registered transport (qla4xxx)
Nov 6 00:39:30.723855 kernel: QLogic iSCSI HBA Driver
Nov 6 00:39:30.739908 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 6 00:39:30.750812 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 00:39:30.751610 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 6 00:39:30.773927 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 6 00:39:30.775039 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 6 00:39:30.776532 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 6 00:39:30.794513 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 6 00:39:30.795603 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 00:39:30.813549 systemd-udevd[610]: Using default interface naming scheme 'v257'.
Nov 6 00:39:30.820049 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 00:39:30.821087 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 6 00:39:30.838674 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 6 00:39:30.839123 dracut-pre-trigger[685]: rd.md=0: removing MD RAID activation
Nov 6 00:39:30.841426 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 6 00:39:30.853771 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 6 00:39:30.855577 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 6 00:39:30.870993 systemd-networkd[727]: lo: Link UP
Nov 6 00:39:30.871013 systemd-networkd[727]: lo: Gained carrier
Nov 6 00:39:30.871348 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 6 00:39:30.871493 systemd[1]: Reached target network.target - Network.
Nov 6 00:39:30.937693 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 00:39:30.938596 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 6 00:39:31.009342 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Nov 6 00:39:31.017626 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Nov 6 00:39:31.025764 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Nov 6 00:39:31.036840 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Nov 6 00:39:31.038340 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 6 00:39:31.081492 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI
Nov 6 00:39:31.084188 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Nov 6 00:39:31.085088 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Nov 6 00:39:31.110499 kernel: cryptd: max_cpu_qlen set to 1000
Nov 6 00:39:31.123946 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Nov 6 00:39:31.127241 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 6 00:39:31.127431 (udev-worker)[758]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Nov 6 00:39:31.127934 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:39:31.129630 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:39:31.130592 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:39:31.134776 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Nov 6 00:39:31.135667 systemd-networkd[727]: eth0: Interface name change detected, renamed to ens192.
Nov 6 00:39:31.137488 kernel: AES CTR mode by8 optimization enabled
Nov 6 00:39:31.152458 systemd-networkd[727]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Nov 6 00:39:31.155084 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Nov 6 00:39:31.155217 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Nov 6 00:39:31.156359 systemd-networkd[727]: ens192: Link UP
Nov 6 00:39:31.156363 systemd-networkd[727]: ens192: Gained carrier
Nov 6 00:39:31.174568 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:39:31.206927 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 6 00:39:31.207272 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 6 00:39:31.207399 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 00:39:31.207610 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 6 00:39:31.208292 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 6 00:39:31.227085 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 6 00:39:32.118300 disk-uuid[787]: Warning: The kernel is still using the old partition table.
Nov 6 00:39:32.118300 disk-uuid[787]: The new table will be used at the next reboot or after you
Nov 6 00:39:32.118300 disk-uuid[787]: run partprobe(8) or kpartx(8)
Nov 6 00:39:32.118300 disk-uuid[787]: The operation has completed successfully.
Nov 6 00:39:32.125037 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 6 00:39:32.125108 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 6 00:39:32.125854 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 6 00:39:32.153490 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (878)
Nov 6 00:39:32.155537 kernel: BTRFS info (device sda6): first mount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552
Nov 6 00:39:32.155565 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:39:32.160635 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 6 00:39:32.160668 kernel: BTRFS info (device sda6): enabling free space tree
Nov 6 00:39:32.164487 kernel: BTRFS info (device sda6): last unmount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552
Nov 6 00:39:32.165119 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 6 00:39:32.165935 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 6 00:39:32.260617 systemd-networkd[727]: ens192: Gained IPv6LL
Nov 6 00:39:32.290105 ignition[897]: Ignition 2.22.0
Nov 6 00:39:32.290115 ignition[897]: Stage: fetch-offline
Nov 6 00:39:32.290138 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Nov 6 00:39:32.290145 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Nov 6 00:39:32.290197 ignition[897]: parsed url from cmdline: ""
Nov 6 00:39:32.290200 ignition[897]: no config URL provided
Nov 6 00:39:32.290203 ignition[897]: reading system config file "/usr/lib/ignition/user.ign"
Nov 6 00:39:32.290208 ignition[897]: no config at "/usr/lib/ignition/user.ign"
Nov 6 00:39:32.290589 ignition[897]: config successfully fetched
Nov 6 00:39:32.290607 ignition[897]: parsing config with SHA512: 788e51cd080a0501e370cae863471250eb0c744f2da02e75c0efc3b1b0b4137a0932631748b418cb0ae8f45bae78d1ca192b14c29fd93439390ca3fb4b59907f
Nov 6 00:39:32.293265 unknown[897]: fetched base config from "system"
Nov 6 00:39:32.293278 unknown[897]: fetched user config from "vmware"
Nov 6 00:39:32.293612 ignition[897]: fetch-offline: fetch-offline passed
Nov 6 00:39:32.293651 ignition[897]: Ignition finished successfully
Nov 6 00:39:32.294610 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 6 00:39:32.294829 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 6 00:39:32.295306 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 6 00:39:32.314796 ignition[904]: Ignition 2.22.0
Nov 6 00:39:32.314804 ignition[904]: Stage: kargs
Nov 6 00:39:32.314915 ignition[904]: no configs at "/usr/lib/ignition/base.d"
Nov 6 00:39:32.314920 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Nov 6 00:39:32.315419 ignition[904]: kargs: kargs passed
Nov 6 00:39:32.315445 ignition[904]: Ignition finished successfully
Nov 6 00:39:32.316526 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 6 00:39:32.317339 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 6 00:39:32.333355 ignition[910]: Ignition 2.22.0
Nov 6 00:39:32.333362 ignition[910]: Stage: disks
Nov 6 00:39:32.333446 ignition[910]: no configs at "/usr/lib/ignition/base.d"
Nov 6 00:39:32.333451 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Nov 6 00:39:32.334054 ignition[910]: disks: disks passed
Nov 6 00:39:32.334080 ignition[910]: Ignition finished successfully
Nov 6 00:39:32.335416 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 6 00:39:32.335774 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 6 00:39:32.336002 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 6 00:39:32.336230 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 6 00:39:32.336435 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 6 00:39:32.336642 systemd[1]: Reached target basic.target - Basic System.
Nov 6 00:39:32.337348 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 6 00:39:32.365758 systemd-fsck[918]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks
Nov 6 00:39:32.367056 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 6 00:39:32.368126 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 6 00:39:32.451937 kernel: EXT4-fs (sda9): mounted filesystem d1cfc077-cc9a-4d2c-97de-8a87792eb8cf r/w with ordered data mode. Quota mode: none.
Nov 6 00:39:32.451443 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 6 00:39:32.451782 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 6 00:39:32.452947 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 6 00:39:32.454523 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 6 00:39:32.454908 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 6 00:39:32.455112 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 6 00:39:32.455301 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 6 00:39:32.458890 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 6 00:39:32.459891 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 6 00:39:32.463500 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (927)
Nov 6 00:39:32.465848 kernel: BTRFS info (device sda6): first mount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552
Nov 6 00:39:32.465867 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:39:32.470485 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 6 00:39:32.470512 kernel: BTRFS info (device sda6): enabling free space tree
Nov 6 00:39:32.471551 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 6 00:39:32.497760 initrd-setup-root[951]: cut: /sysroot/etc/passwd: No such file or directory
Nov 6 00:39:32.500681 initrd-setup-root[958]: cut: /sysroot/etc/group: No such file or directory
Nov 6 00:39:32.502980 initrd-setup-root[965]: cut: /sysroot/etc/shadow: No such file or directory
Nov 6 00:39:32.505254 initrd-setup-root[972]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 6 00:39:32.561034 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 6 00:39:32.562066 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 6 00:39:32.563535 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 6 00:39:32.571263 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 6 00:39:32.573533 kernel: BTRFS info (device sda6): last unmount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552
Nov 6 00:39:32.586081 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 6 00:39:32.593657 ignition[1041]: INFO : Ignition 2.22.0
Nov 6 00:39:32.593657 ignition[1041]: INFO : Stage: mount
Nov 6 00:39:32.593974 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 00:39:32.593974 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Nov 6 00:39:32.594192 ignition[1041]: INFO : mount: mount passed
Nov 6 00:39:32.594192 ignition[1041]: INFO : Ignition finished successfully
Nov 6 00:39:32.595139 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 6 00:39:32.596020 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 6 00:39:32.604352 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 6 00:39:32.621316 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1052) Nov 6 00:39:32.621343 kernel: BTRFS info (device sda6): first mount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552 Nov 6 00:39:32.621353 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:39:32.625861 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 6 00:39:32.625889 kernel: BTRFS info (device sda6): enabling free space tree Nov 6 00:39:32.626929 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 00:39:32.645462 ignition[1068]: INFO : Ignition 2.22.0 Nov 6 00:39:32.645462 ignition[1068]: INFO : Stage: files Nov 6 00:39:32.645807 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:39:32.645807 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 6 00:39:32.647388 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping Nov 6 00:39:32.647939 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 6 00:39:32.648119 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 6 00:39:32.650629 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 6 00:39:32.650855 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 6 00:39:32.651092 unknown[1068]: wrote ssh authorized keys file for user: core Nov 6 00:39:32.651332 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 6 00:39:32.653252 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 00:39:32.653252 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 6 00:39:32.696137 
ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 6 00:39:32.765516 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 00:39:32.765516 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 6 00:39:32.766004 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 6 00:39:32.766004 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 6 00:39:32.766004 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 6 00:39:32.766004 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 00:39:32.766004 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 00:39:32.766004 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 00:39:32.766004 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 00:39:32.767423 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 00:39:32.767423 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 00:39:32.767423 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 00:39:32.769581 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 00:39:32.769581 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 00:39:32.770030 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 6 00:39:33.230875 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 6 00:39:33.517517 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 00:39:33.517517 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Nov 6 00:39:33.518542 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Nov 6 00:39:33.518542 ignition[1068]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 6 00:39:33.519030 ignition[1068]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 00:39:33.519501 ignition[1068]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 00:39:33.519501 ignition[1068]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 6 00:39:33.519501 ignition[1068]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 6 00:39:33.519501 ignition[1068]: INFO : files: 
op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 6 00:39:33.519501 ignition[1068]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 6 00:39:33.519501 ignition[1068]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Nov 6 00:39:33.519501 ignition[1068]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Nov 6 00:39:33.542508 ignition[1068]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 6 00:39:33.544791 ignition[1068]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 6 00:39:33.544976 ignition[1068]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 6 00:39:33.544976 ignition[1068]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Nov 6 00:39:33.544976 ignition[1068]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Nov 6 00:39:33.544976 ignition[1068]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 6 00:39:33.545519 ignition[1068]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 6 00:39:33.545519 ignition[1068]: INFO : files: files passed
Nov 6 00:39:33.545519 ignition[1068]: INFO : Ignition finished successfully
Nov 6 00:39:33.546436 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 6 00:39:33.547097 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 6 00:39:33.548540 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 6 00:39:33.558757 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 6 00:39:33.559005 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 6 00:39:33.562825 initrd-setup-root-after-ignition[1102]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 00:39:33.563107 initrd-setup-root-after-ignition[1102]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 00:39:33.563924 initrd-setup-root-after-ignition[1106]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 00:39:33.564716 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 6 00:39:33.565235 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 6 00:39:33.565985 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 6 00:39:33.592651 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 6 00:39:33.592728 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 6 00:39:33.593013 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 6 00:39:33.593140 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 6 00:39:33.593449 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 6 00:39:33.593935 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 6 00:39:33.603821 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 6 00:39:33.604590 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 6 00:39:33.615198 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 6 00:39:33.615289 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 6 00:39:33.615511 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 00:39:33.615749 systemd[1]: Stopped target timers.target - Timer Units.
Nov 6 00:39:33.615969 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 6 00:39:33.616033 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 6 00:39:33.616379 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 6 00:39:33.616546 systemd[1]: Stopped target basic.target - Basic System.
Nov 6 00:39:33.616723 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 6 00:39:33.616954 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 6 00:39:33.617154 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 6 00:39:33.617363 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 6 00:39:33.617586 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 6 00:39:33.617791 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 6 00:39:33.617998 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 6 00:39:33.618209 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 6 00:39:33.618395 systemd[1]: Stopped target swap.target - Swaps.
Nov 6 00:39:33.618569 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 6 00:39:33.618635 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 6 00:39:33.618950 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 6 00:39:33.619241 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 00:39:33.619429 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 6 00:39:33.619528 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 00:39:33.619682 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 6 00:39:33.619745 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 6 00:39:33.620023 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 6 00:39:33.620089 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 6 00:39:33.620309 systemd[1]: Stopped target paths.target - Path Units.
Nov 6 00:39:33.620450 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 6 00:39:33.620502 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 00:39:33.620699 systemd[1]: Stopped target slices.target - Slice Units.
Nov 6 00:39:33.620898 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 6 00:39:33.621070 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 6 00:39:33.621117 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 00:39:33.621275 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 6 00:39:33.621318 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 00:39:33.621504 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 6 00:39:33.621581 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 6 00:39:33.621838 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 6 00:39:33.621899 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 6 00:39:33.622564 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 6 00:39:33.623929 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 6 00:39:33.624041 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 6 00:39:33.624111 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 00:39:33.625928 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 6 00:39:33.626003 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 00:39:33.626218 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 6 00:39:33.626280 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 6 00:39:33.630690 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 6 00:39:33.630742 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 6 00:39:33.641512 ignition[1127]: INFO : Ignition 2.22.0
Nov 6 00:39:33.641512 ignition[1127]: INFO : Stage: umount
Nov 6 00:39:33.641512 ignition[1127]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 00:39:33.641512 ignition[1127]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Nov 6 00:39:33.641512 ignition[1127]: INFO : umount: umount passed
Nov 6 00:39:33.641512 ignition[1127]: INFO : Ignition finished successfully
Nov 6 00:39:33.640713 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 6 00:39:33.643054 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 6 00:39:33.643123 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 6 00:39:33.643599 systemd[1]: Stopped target network.target - Network.
Nov 6 00:39:33.643970 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 6 00:39:33.644105 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 6 00:39:33.644343 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 6 00:39:33.644369 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 6 00:39:33.644664 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 6 00:39:33.644690 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 6 00:39:33.644923 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 6 00:39:33.644946 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 6 00:39:33.645347 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 6 00:39:33.645637 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 6 00:39:33.648576 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 6 00:39:33.648639 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 6 00:39:33.653315 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 6 00:39:33.653375 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 6 00:39:33.654566 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 6 00:39:33.654707 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 6 00:39:33.654728 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 00:39:33.655289 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 6 00:39:33.655390 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 6 00:39:33.655417 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 6 00:39:33.655601 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Nov 6 00:39:33.655624 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Nov 6 00:39:33.655775 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 6 00:39:33.655796 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 6 00:39:33.655947 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 6 00:39:33.655968 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 6 00:39:33.658063 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 00:39:33.663388 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 6 00:39:33.663567 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 00:39:33.664005 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 6 00:39:33.664029 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 6 00:39:33.664145 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 6 00:39:33.664162 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 00:39:33.664266 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 6 00:39:33.664291 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 6 00:39:33.664437 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 6 00:39:33.664461 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 6 00:39:33.664609 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 6 00:39:33.664634 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 00:39:33.666373 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 6 00:39:33.666652 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 6 00:39:33.666791 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 00:39:33.667094 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 6 00:39:33.667233 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 00:39:33.667537 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 6 00:39:33.667680 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 6 00:39:33.667984 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 6 00:39:33.668112 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 00:39:33.668390 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 6 00:39:33.668579 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:39:33.674157 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 6 00:39:33.674369 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 6 00:39:33.701310 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 6 00:39:33.701395 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 6 00:39:33.720787 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 6 00:39:33.720847 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 6 00:39:33.721220 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 6 00:39:33.721336 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 6 00:39:33.721370 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 6 00:39:33.722004 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 6 00:39:33.737773 systemd[1]: Switching root.
Nov 6 00:39:33.780001 systemd-journald[332]: Journal stopped
Nov 6 00:39:34.863337 systemd-journald[332]: Received SIGTERM from PID 1 (systemd).
Nov 6 00:39:34.863367 kernel: SELinux: policy capability network_peer_controls=1
Nov 6 00:39:34.863376 kernel: SELinux: policy capability open_perms=1
Nov 6 00:39:34.863383 kernel: SELinux: policy capability extended_socket_class=1
Nov 6 00:39:34.863390 kernel: SELinux: policy capability always_check_network=0
Nov 6 00:39:34.863396 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 6 00:39:34.863404 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 6 00:39:34.863411 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 6 00:39:34.863418 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 6 00:39:34.863424 kernel: SELinux: policy capability userspace_initial_context=0
Nov 6 00:39:34.863432 systemd[1]: Successfully loaded SELinux policy in 43.946ms.
Nov 6 00:39:34.863441 kernel: audit: type=1403 audit(1762389574.298:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 6 00:39:34.863448 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.513ms.
Nov 6 00:39:34.863456 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 6 00:39:34.863464 systemd[1]: Detected virtualization vmware.
Nov 6 00:39:34.863479 systemd[1]: Detected architecture x86-64.
Nov 6 00:39:34.864578 systemd[1]: Detected first boot.
Nov 6 00:39:34.864590 systemd[1]: Initializing machine ID from random generator.
Nov 6 00:39:34.864598 zram_generator::config[1174]: No configuration found.
Nov 6 00:39:34.864708 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Nov 6 00:39:34.864724 kernel: Guest personality initialized and is active
Nov 6 00:39:34.864732 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 6 00:39:34.864739 kernel: Initialized host personality
Nov 6 00:39:34.864746 kernel: NET: Registered PF_VSOCK protocol family
Nov 6 00:39:34.864754 systemd[1]: Populated /etc with preset unit settings.
Nov 6 00:39:34.864763 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Nov 6 00:39:34.864774 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Nov 6 00:39:34.864782 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 6 00:39:34.864789 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 6 00:39:34.864797 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 6 00:39:34.864805 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 6 00:39:34.864813 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 6 00:39:34.864822 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 6 00:39:34.864833 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 6 00:39:34.864843 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 6 00:39:34.864851 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 6 00:39:34.864859 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 6 00:39:34.864867 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 6 00:39:34.864876 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 00:39:34.864884 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 00:39:34.864894 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 6 00:39:34.864902 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 6 00:39:34.864911 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 6 00:39:34.864919 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 6 00:39:34.864927 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 6 00:39:34.864936 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 00:39:34.864944 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 6 00:39:34.864952 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 6 00:39:34.864960 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 6 00:39:34.864968 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 6 00:39:34.864975 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 6 00:39:34.864985 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 00:39:34.864993 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 6 00:39:34.865001 systemd[1]: Reached target slices.target - Slice Units.
Nov 6 00:39:34.865009 systemd[1]: Reached target swap.target - Swaps.
Nov 6 00:39:34.865016 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 6 00:39:34.865024 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 6 00:39:34.865034 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 6 00:39:34.865042 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 00:39:34.865050 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 6 00:39:34.865059 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 00:39:34.865068 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 6 00:39:34.865076 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 6 00:39:34.865084 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 6 00:39:34.865093 systemd[1]: Mounting media.mount - External Media Directory...
Nov 6 00:39:34.865101 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:39:34.865109 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 6 00:39:34.865117 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 6 00:39:34.865126 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 6 00:39:34.865134 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 6 00:39:34.865142 systemd[1]: Reached target machines.target - Containers.
Nov 6 00:39:34.865150 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 6 00:39:34.865158 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Nov 6 00:39:34.865166 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 6 00:39:34.865174 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 6 00:39:34.865183 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 00:39:34.865191 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 6 00:39:34.865199 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 00:39:34.865207 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 6 00:39:34.865215 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 00:39:34.865224 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 6 00:39:34.865233 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 6 00:39:34.865242 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 6 00:39:34.865250 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 6 00:39:34.865258 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 6 00:39:34.865266 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:39:34.865275 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 6 00:39:34.865284 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 6 00:39:34.865293 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 6 00:39:34.865302 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 6 00:39:34.865310 kernel: fuse: init (API version 7.41)
Nov 6 00:39:34.865317 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 6 00:39:34.865325 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 6 00:39:34.865334 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:39:34.865342 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 6 00:39:34.865351 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 6 00:39:34.865360 systemd[1]: Mounted media.mount - External Media Directory.
Nov 6 00:39:34.865367 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 6 00:39:34.865376 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 6 00:39:34.865384 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 6 00:39:34.865391 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 00:39:34.865401 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 6 00:39:34.865409 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 6 00:39:34.865416 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 00:39:34.865425 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 00:39:34.865433 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 6 00:39:34.865441 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 6 00:39:34.865449 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 6 00:39:34.865458 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 6 00:39:34.865466 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 6 00:39:34.866056 kernel: ACPI: bus type drm_connector registered
Nov 6 00:39:34.866071 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 6 00:39:34.866080 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 6 00:39:34.866088 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 6 00:39:34.866096 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 6 00:39:34.866107 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 6 00:39:34.866116 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 6 00:39:34.866144 systemd-journald[1267]: Collecting audit messages is disabled.
Nov 6 00:39:34.866168 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 6 00:39:34.866177 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 6 00:39:34.866187 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 6 00:39:34.866195 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 6 00:39:34.866204 systemd-journald[1267]: Journal started
Nov 6 00:39:34.866221 systemd-journald[1267]: Runtime Journal (/run/log/journal/1c7036fe114840459aedaf34f9bbf300) is 4.8M, max 38.5M, 33.7M free.
Nov 6 00:39:34.649020 systemd[1]: Queued start job for default target multi-user.target.
Nov 6 00:39:34.655423 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 6 00:39:34.655704 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 6 00:39:34.866779 jq[1244]: true
Nov 6 00:39:34.867346 jq[1274]: true
Nov 6 00:39:34.868641 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:39:34.875537 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 6 00:39:34.877525 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 6 00:39:34.877539 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 6 00:39:34.877549 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 6 00:39:34.880490 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 6 00:39:34.887366 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 6 00:39:34.890531 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 6 00:39:34.890574 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 6 00:39:34.893573 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 00:39:34.896906 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 6 00:39:34.898944 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 6 00:39:34.914020 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 6 00:39:34.914339 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 6 00:39:34.916603 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 6 00:39:34.919779 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 6 00:39:34.927771 kernel: loop1: detected capacity change from 0 to 2960
Nov 6 00:39:34.941047 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 6 00:39:34.951666 systemd-journald[1267]: Time spent on flushing to /var/log/journal/1c7036fe114840459aedaf34f9bbf300 is 45.037ms for 1748 entries.
Nov 6 00:39:34.951666 systemd-journald[1267]: System Journal (/var/log/journal/1c7036fe114840459aedaf34f9bbf300) is 8M, max 588.1M, 580.1M free.
Nov 6 00:39:35.002938 systemd-journald[1267]: Received client request to flush runtime journal.
Nov 6 00:39:35.002965 kernel: loop2: detected capacity change from 0 to 128048
Nov 6 00:39:34.954212 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 6 00:39:34.978625 ignition[1297]: Ignition 2.22.0
Nov 6 00:39:34.962369 systemd-tmpfiles[1296]: ACLs are not supported, ignoring.
Nov 6 00:39:34.978777 ignition[1297]: deleting config from guestinfo properties
Nov 6 00:39:34.962379 systemd-tmpfiles[1296]: ACLs are not supported, ignoring.
Nov 6 00:39:34.985939 ignition[1297]: Successfully deleted config
Nov 6 00:39:34.972492 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 6 00:39:34.974703 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 6 00:39:34.988636 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Nov 6 00:39:35.005843 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 6 00:39:35.024503 kernel: loop3: detected capacity change from 0 to 110976
Nov 6 00:39:35.025513 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 00:39:35.039205 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 6 00:39:35.042553 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 6 00:39:35.045564 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 6 00:39:35.052050 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 6 00:39:35.064492 kernel: loop4: detected capacity change from 0 to 219144
Nov 6 00:39:35.066332 systemd-tmpfiles[1342]: ACLs are not supported, ignoring.
Nov 6 00:39:35.066343 systemd-tmpfiles[1342]: ACLs are not supported, ignoring.
Nov 6 00:39:35.070615 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 00:39:35.090156 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 6 00:39:35.094652 kernel: loop5: detected capacity change from 0 to 2960
Nov 6 00:39:35.106488 kernel: loop6: detected capacity change from 0 to 128048
Nov 6 00:39:35.119513 kernel: loop7: detected capacity change from 0 to 110976
Nov 6 00:39:35.138486 kernel: loop1: detected capacity change from 0 to 219144
Nov 6 00:39:35.146637 systemd-resolved[1341]: Positive Trust Anchors:
Nov 6 00:39:35.146819 systemd-resolved[1341]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 6 00:39:35.146852 systemd-resolved[1341]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 6 00:39:35.146916 systemd-resolved[1341]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 6 00:39:35.149734 systemd-resolved[1341]: Defaulting to hostname 'linux'.
Nov 6 00:39:35.150625 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 6 00:39:35.150809 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 6 00:39:35.158807 (sd-merge)[1352]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-vmware.raw'.
Nov 6 00:39:35.162003 (sd-merge)[1352]: Merged extensions into '/usr'.
Nov 6 00:39:35.166548 systemd[1]: Reload requested from client PID 1292 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 6 00:39:35.166559 systemd[1]: Reloading...
Nov 6 00:39:35.206525 zram_generator::config[1378]: No configuration found.
Nov 6 00:39:35.293825 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Nov 6 00:39:35.343928 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 6 00:39:35.344181 systemd[1]: Reloading finished in 177 ms.
Nov 6 00:39:35.355119 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 6 00:39:35.363577 systemd[1]: Starting ensure-sysext.service...
Nov 6 00:39:35.364573 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 6 00:39:35.382580 systemd[1]: Reload requested from client PID 1437 ('systemctl') (unit ensure-sysext.service)...
Nov 6 00:39:35.382589 systemd[1]: Reloading...
Nov 6 00:39:35.396259 systemd-tmpfiles[1438]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 6 00:39:35.396288 systemd-tmpfiles[1438]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 6 00:39:35.396489 systemd-tmpfiles[1438]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 6 00:39:35.396672 systemd-tmpfiles[1438]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 6 00:39:35.397186 systemd-tmpfiles[1438]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 6 00:39:35.397351 systemd-tmpfiles[1438]: ACLs are not supported, ignoring.
Nov 6 00:39:35.397386 systemd-tmpfiles[1438]: ACLs are not supported, ignoring.
Nov 6 00:39:35.419528 zram_generator::config[1464]: No configuration found.
Nov 6 00:39:35.475793 systemd-tmpfiles[1438]: Detected autofs mount point /boot during canonicalization of boot.
Nov 6 00:39:35.475801 systemd-tmpfiles[1438]: Skipping /boot
Nov 6 00:39:35.480263 systemd-tmpfiles[1438]: Detected autofs mount point /boot during canonicalization of boot.
Nov 6 00:39:35.480331 systemd-tmpfiles[1438]: Skipping /boot
Nov 6 00:39:35.509946 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Nov 6 00:39:35.560298 systemd[1]: Reloading finished in 177 ms.
Nov 6 00:39:35.570549 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 6 00:39:35.578584 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 00:39:35.583553 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 6 00:39:35.586261 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 6 00:39:35.589666 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 6 00:39:35.591075 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 6 00:39:35.592181 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 00:39:35.597258 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 6 00:39:35.601083 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 6 00:39:35.604950 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 00:39:35.605784 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 00:39:35.608195 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 6 00:39:35.611106 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 00:39:35.611274 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:39:35.611354 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:39:35.612196 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 6 00:39:35.612448 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 6 00:39:35.625430 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 6 00:39:35.625627 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:39:35.625733 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:39:35.631418 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 6 00:39:35.631860 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:39:35.631939 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:39:35.633629 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 6 00:39:35.634045 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 6 00:39:35.634172 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 6 00:39:35.634608 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 6 00:39:35.635613 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 6 00:39:35.636150 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 6 00:39:35.636278 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 6 00:39:35.638847 systemd[1]: Finished ensure-sysext.service.
Nov 6 00:39:35.644233 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 00:39:35.644582 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 00:39:35.644899 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 6 00:39:35.645048 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 6 00:39:35.645307 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 6 00:39:35.645423 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 6 00:39:35.647734 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 6 00:39:35.647784 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 6 00:39:35.649898 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 6 00:39:35.656955 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:39:35.659170 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 6 00:39:35.660169 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 6 00:39:35.662204 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:39:35.669449 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 6 00:39:35.674102 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 6 00:39:35.723370 systemd-udevd[1530]: Using default interface naming scheme 'v257'.
Nov 6 00:39:35.734461 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 6 00:39:35.765807 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 6 00:39:35.765987 systemd[1]: Reached target time-set.target - System Time Set.
Nov 6 00:39:35.798714 augenrules[1576]: No rules
Nov 6 00:39:35.799359 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 6 00:39:35.799548 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 6 00:39:36.042876 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 00:39:36.044981 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 6 00:39:36.089014 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 6 00:39:36.126503 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 6 00:39:36.132486 kernel: ACPI: button: Power Button [PWRF]
Nov 6 00:39:36.135493 kernel: mousedev: PS/2 mouse device common for all mice
Nov 6 00:39:36.148690 systemd-networkd[1585]: lo: Link UP
Nov 6 00:39:36.149249 systemd-networkd[1585]: lo: Gained carrier
Nov 6 00:39:36.150712 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 6 00:39:36.151108 systemd[1]: Reached target network.target - Network.
Nov 6 00:39:36.151278 systemd-networkd[1585]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
Nov 6 00:39:36.152626 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 6 00:39:36.155487 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Nov 6 00:39:36.155639 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Nov 6 00:39:36.155550 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 6 00:39:36.158046 systemd-networkd[1585]: ens192: Link UP
Nov 6 00:39:36.158171 systemd-networkd[1585]: ens192: Gained carrier
Nov 6 00:39:36.161113 systemd-timesyncd[1554]: Network configuration changed, trying to establish connection.
Nov 6 00:39:36.188164 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 6 00:39:36.216527 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Nov 6 00:39:36.277581 (udev-worker)[1586]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Nov 6 00:39:36.298523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:39:36.362140 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Nov 6 00:39:36.364454 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 6 00:39:36.407946 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 6 00:39:36.426364 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 6 00:39:36.426608 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 6 00:39:36.454236 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:39:36.727387 ldconfig[1528]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 6 00:39:36.729610 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 6 00:39:36.730958 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 6 00:39:36.744119 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 6 00:39:36.744367 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 6 00:39:36.744533 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 6 00:39:36.744656 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 6 00:39:36.744770 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 6 00:39:36.744954 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 6 00:39:36.745094 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 6 00:39:36.745205 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 6 00:39:36.745315 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 6 00:39:36.745337 systemd[1]: Reached target paths.target - Path Units.
Nov 6 00:39:36.745424 systemd[1]: Reached target timers.target - Timer Units.
Nov 6 00:39:36.746232 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 6 00:39:36.747376 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 6 00:39:36.748864 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 6 00:39:36.749057 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 6 00:39:36.749176 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 6 00:39:36.752529 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 6 00:39:36.752807 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 6 00:39:36.753299 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 6 00:39:36.753900 systemd[1]: Reached target sockets.target - Socket Units.
Nov 6 00:39:36.753999 systemd[1]: Reached target basic.target - Basic System.
Nov 6 00:39:36.754124 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 6 00:39:36.754142 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 6 00:39:36.754931 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 6 00:39:36.757562 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 6 00:39:36.758736 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 6 00:39:36.761546 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 6 00:39:36.769769 jq[1649]: false
Nov 6 00:39:36.770574 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 6 00:39:36.770704 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 6 00:39:36.771652 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 6 00:39:36.772458 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 6 00:39:36.774618 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 6 00:39:36.777338 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 6 00:39:36.781668 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 6 00:39:36.784699 extend-filesystems[1650]: Found /dev/sda6
Nov 6 00:39:36.786737 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 6 00:39:36.786860 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 6 00:39:36.787351 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 6 00:39:36.789674 systemd[1]: Starting update-engine.service - Update Engine...
Nov 6 00:39:36.791562 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 6 00:39:36.796147 extend-filesystems[1650]: Found /dev/sda9
Nov 6 00:39:36.798146 google_oslogin_nss_cache[1651]: oslogin_cache_refresh[1651]: Refreshing passwd entry cache
Nov 6 00:39:36.798150 oslogin_cache_refresh[1651]: Refreshing passwd entry cache
Nov 6 00:39:36.799124 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools...
Nov 6 00:39:36.802190 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 6 00:39:36.802456 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 6 00:39:36.802888 extend-filesystems[1650]: Checking size of /dev/sda9
Nov 6 00:39:36.804501 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 6 00:39:36.805701 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 6 00:39:36.805841 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 6 00:39:36.812622 google_oslogin_nss_cache[1651]: oslogin_cache_refresh[1651]: Failure getting users, quitting
Nov 6 00:39:36.812615 oslogin_cache_refresh[1651]: Failure getting users, quitting
Nov 6 00:39:36.812693 google_oslogin_nss_cache[1651]: oslogin_cache_refresh[1651]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 6 00:39:36.812628 oslogin_cache_refresh[1651]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 6 00:39:36.813900 google_oslogin_nss_cache[1651]: oslogin_cache_refresh[1651]: Refreshing group entry cache
Nov 6 00:39:36.813211 oslogin_cache_refresh[1651]: Refreshing group entry cache
Nov 6 00:39:36.815139 extend-filesystems[1650]: Resized partition /dev/sda9
Nov 6 00:39:36.819596 google_oslogin_nss_cache[1651]: oslogin_cache_refresh[1651]: Failure getting groups, quitting
Nov 6 00:39:36.819596 google_oslogin_nss_cache[1651]: oslogin_cache_refresh[1651]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 6 00:39:36.819588 oslogin_cache_refresh[1651]: Failure getting groups, quitting
Nov 6 00:39:36.819597 oslogin_cache_refresh[1651]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 6 00:39:36.824145 jq[1663]: true
Nov 6 00:39:36.824730 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 6 00:39:36.824890 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 6 00:39:36.832605 extend-filesystems[1686]: resize2fs 1.47.3 (8-Jul-2025)
Nov 6 00:39:36.840779 (ntainerd)[1685]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 6 00:39:36.842549 dbus-daemon[1647]: [system] SELinux support is enabled
Nov 6 00:39:36.842892 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 6 00:39:36.856066 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 6 00:39:36.856090 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 6 00:39:36.856254 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 6 00:39:36.856266 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 6 00:39:36.856580 update_engine[1661]: I20251106 00:39:36.856526 1661 main.cc:92] Flatcar Update Engine starting
Nov 6 00:39:36.859815 systemd[1]: motdgen.service: Deactivated successfully.
Nov 6 00:39:36.859984 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 6 00:39:36.862494 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 1635323 blocks
Nov 6 00:39:36.867596 kernel: EXT4-fs (sda9): resized filesystem to 1635323
Nov 6 00:39:36.867621 update_engine[1661]: I20251106 00:39:36.863601 1661 update_check_scheduler.cc:74] Next update check in 6m2s
Nov 6 00:39:36.863414 systemd[1]: Started update-engine.service - Update Engine.
Nov 6 00:39:36.868925 extend-filesystems[1686]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Nov 6 00:39:36.868925 extend-filesystems[1686]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 6 00:39:36.868925 extend-filesystems[1686]: The filesystem on /dev/sda9 is now 1635323 (4k) blocks long.
Nov 6 00:39:36.873449 jq[1692]: true
Nov 6 00:39:36.873606 extend-filesystems[1650]: Resized filesystem in /dev/sda9
Nov 6 00:39:36.876600 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 6 00:39:36.877106 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 6 00:39:36.877252 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 6 00:39:36.883501 tar[1673]: linux-amd64/LICENSE
Nov 6 00:39:36.883501 tar[1673]: linux-amd64/helm
Nov 6 00:39:36.883176 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools.
Nov 6 00:39:36.889708 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware...
Nov 6 00:39:36.915903 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware.
Nov 6 00:39:36.918114 sshd_keygen[1672]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 6 00:39:36.948013 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 6 00:39:36.951707 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 6 00:39:36.956714 unknown[1706]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath
Nov 6 00:39:36.957520 unknown[1706]: Core dump limit set to -1
Nov 6 00:39:36.996940 systemd[1]: issuegen.service: Deactivated successfully.
Nov 6 00:39:36.997534 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 6 00:39:37.026822 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 6 00:39:37.041853 systemd-logind[1659]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 6 00:39:37.041874 systemd-logind[1659]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 6 00:39:37.042326 systemd-logind[1659]: New seat seat0.
Nov 6 00:39:37.045658 bash[1746]: Updated "/home/core/.ssh/authorized_keys"
Nov 6 00:39:37.047917 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 6 00:39:37.054931 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 6 00:39:37.055993 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 6 00:39:37.061973 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 6 00:39:37.068062 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 6 00:39:37.070711 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 6 00:39:37.070928 systemd[1]: Reached target getty.target - Login Prompts.
Nov 6 00:39:37.111892 locksmithd[1699]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 6 00:39:37.182359 containerd[1685]: time="2025-11-06T00:39:37Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 6 00:39:37.182940 containerd[1685]: time="2025-11-06T00:39:37.182923557Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 6 00:39:37.188049 containerd[1685]: time="2025-11-06T00:39:37.188023535Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="5.767µs"
Nov 6 00:39:37.188122 containerd[1685]: time="2025-11-06T00:39:37.188112895Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 6 00:39:37.188160 containerd[1685]: time="2025-11-06T00:39:37.188152738Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 6 00:39:37.188285 containerd[1685]: time="2025-11-06T00:39:37.188275675Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 6 00:39:37.188320 containerd[1685]: time="2025-11-06T00:39:37.188313539Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 6 00:39:37.188371 containerd[1685]: time="2025-11-06T00:39:37.188363384Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 6 00:39:37.188435 containerd[1685]: time="2025-11-06T00:39:37.188425217Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 6 00:39:37.188465 containerd[1685]: time="2025-11-06T00:39:37.188458968Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 6 00:39:37.188631 containerd[1685]: time="2025-11-06T00:39:37.188619706Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 6 00:39:37.189165 containerd[1685]: time="2025-11-06T00:39:37.188657859Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 6 00:39:37.189165 containerd[1685]: time="2025-11-06T00:39:37.188668334Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 6 00:39:37.189165 containerd[1685]: time="2025-11-06T00:39:37.188673085Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 6 00:39:37.189165 containerd[1685]: time="2025-11-06T00:39:37.188719720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 6 00:39:37.189165 containerd[1685]: time="2025-11-06T00:39:37.188839313Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 6 00:39:37.189165 containerd[1685]: time="2025-11-06T00:39:37.188854995Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 6 00:39:37.189165 containerd[1685]: time="2025-11-06T00:39:37.188860777Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 6 00:39:37.189165 containerd[1685]: time="2025-11-06T00:39:37.188880061Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 6 00:39:37.189165 containerd[1685]: time="2025-11-06T00:39:37.189020921Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 6 00:39:37.189165 containerd[1685]: time="2025-11-06T00:39:37.189063078Z" level=info msg="metadata content store policy set" policy=shared
Nov 6 00:39:37.195885 containerd[1685]: time="2025-11-06T00:39:37.195856613Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 6 00:39:37.195994 containerd[1685]: time="2025-11-06T00:39:37.195985823Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 6 00:39:37.196038 containerd[1685]: time="2025-11-06T00:39:37.196023764Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 6 00:39:37.196091 containerd[1685]: time="2025-11-06T00:39:37.196082569Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 6 00:39:37.196125 containerd[1685]: time="2025-11-06T00:39:37.196118636Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 6 00:39:37.196169 containerd[1685]: time="2025-11-06T00:39:37.196161043Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 6 00:39:37.196209 containerd[1685]: time="2025-11-06T00:39:37.196201178Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 6 00:39:37.196241 containerd[1685]: time="2025-11-06T00:39:37.196234668Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 6 00:39:37.196272 containerd[1685]: time="2025-11-06T00:39:37.196265848Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 6 00:39:37.196303 containerd[1685]: time="2025-11-06T00:39:37.196296617Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 6 00:39:37.196333 containerd[1685]: time="2025-11-06T00:39:37.196326584Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 6 00:39:37.196365 containerd[1685]: time="2025-11-06T00:39:37.196359350Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 6 00:39:37.196493 containerd[1685]: time="2025-11-06T00:39:37.196470904Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 6 00:39:37.196532 containerd[1685]: time="2025-11-06T00:39:37.196524872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 6 00:39:37.196568 containerd[1685]: time="2025-11-06T00:39:37.196561755Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 6 00:39:37.196606 containerd[1685]: time="2025-11-06T00:39:37.196598824Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 6 00:39:37.196636 containerd[1685]: time="2025-11-06T00:39:37.196630608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 6 00:39:37.196666 containerd[1685]: time="2025-11-06T00:39:37.196660570Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 6 00:39:37.196808 containerd[1685]: time="2025-11-06T00:39:37.196701433Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 6 00:39:37.196808 containerd[1685]: time="2025-11-06T00:39:37.196711027Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 6 00:39:37.196808 containerd[1685]: time="2025-11-06T00:39:37.196717651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 6 00:39:37.196808 containerd[1685]: time="2025-11-06T00:39:37.196727148Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 6 00:39:37.196808 containerd[1685]: time="2025-11-06T00:39:37.196733702Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 6 00:39:37.196992 containerd[1685]: time="2025-11-06T00:39:37.196774319Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 6 00:39:37.197031 containerd[1685]: time="2025-11-06T00:39:37.197024083Z" level=info msg="Start snapshots syncer"
Nov 6 00:39:37.197579 containerd[1685]: time="2025-11-06T00:39:37.197071505Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 6 00:39:37.197579 containerd[1685]: time="2025-11-06T00:39:37.197294456Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 6 00:39:37.197669 containerd[1685]: time="2025-11-06T00:39:37.197330894Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 6 00:39:37.199457 containerd[1685]: time="2025-11-06T00:39:37.199433535Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 6 00:39:37.199600 containerd[1685]: time="2025-11-06T00:39:37.199589293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 6 00:39:37.199651 containerd[1685]: time="2025-11-06T00:39:37.199642445Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 6 00:39:37.199690 containerd[1685]: time="2025-11-06T00:39:37.199682736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 6 00:39:37.199724 containerd[1685]: time="2025-11-06T00:39:37.199715836Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 6 00:39:37.199759 containerd[1685]: time="2025-11-06T00:39:37.199752661Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 6 00:39:37.199803 containerd[1685]: time="2025-11-06T00:39:37.199794998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 6 00:39:37.199837 containerd[1685]: time="2025-11-06T00:39:37.199830852Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 6 00:39:37.199887 containerd[1685]: time="2025-11-06T00:39:37.199879090Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 6 00:39:37.200095 containerd[1685]: time="2025-11-06T00:39:37.200078836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 6 00:39:37.200118 containerd[1685]: time="2025-11-06T00:39:37.200098220Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 6 00:39:37.200133 containerd[1685]: time="2025-11-06T00:39:37.200118400Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:39:37.200133 containerd[1685]: time="2025-11-06T00:39:37.200127941Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:39:37.200164 containerd[1685]: time="2025-11-06T00:39:37.200133701Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:39:37.200164 containerd[1685]: time="2025-11-06T00:39:37.200139701Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:39:37.200164 containerd[1685]: time="2025-11-06T00:39:37.200144366Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 6 00:39:37.200164 containerd[1685]: time="2025-11-06T00:39:37.200149602Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 6 00:39:37.200164 containerd[1685]: time="2025-11-06T00:39:37.200155663Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 6 00:39:37.200226 containerd[1685]: time="2025-11-06T00:39:37.200166891Z" level=info msg="runtime interface created" Nov 6 00:39:37.200226 containerd[1685]: time="2025-11-06T00:39:37.200170450Z" level=info msg="created NRI interface" Nov 6 00:39:37.200226 containerd[1685]: time="2025-11-06T00:39:37.200174911Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 6 00:39:37.200226 containerd[1685]: time="2025-11-06T00:39:37.200183438Z" level=info msg="Connect containerd service" Nov 6 00:39:37.200226 containerd[1685]: time="2025-11-06T00:39:37.200205038Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 00:39:37.200673 containerd[1685]: 
time="2025-11-06T00:39:37.200657272Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 00:39:37.276762 tar[1673]: linux-amd64/README.md Nov 6 00:39:37.284984 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 6 00:39:37.306098 containerd[1685]: time="2025-11-06T00:39:37.306067762Z" level=info msg="Start subscribing containerd event" Nov 6 00:39:37.306174 containerd[1685]: time="2025-11-06T00:39:37.306098088Z" level=info msg="Start recovering state" Nov 6 00:39:37.306196 containerd[1685]: time="2025-11-06T00:39:37.306177621Z" level=info msg="Start event monitor" Nov 6 00:39:37.306196 containerd[1685]: time="2025-11-06T00:39:37.306187281Z" level=info msg="Start cni network conf syncer for default" Nov 6 00:39:37.306196 containerd[1685]: time="2025-11-06T00:39:37.306191591Z" level=info msg="Start streaming server" Nov 6 00:39:37.306234 containerd[1685]: time="2025-11-06T00:39:37.306196997Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 6 00:39:37.306234 containerd[1685]: time="2025-11-06T00:39:37.306201734Z" level=info msg="runtime interface starting up..." Nov 6 00:39:37.306234 containerd[1685]: time="2025-11-06T00:39:37.306205017Z" level=info msg="starting plugins..." Nov 6 00:39:37.306234 containerd[1685]: time="2025-11-06T00:39:37.306224094Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 6 00:39:37.306637 containerd[1685]: time="2025-11-06T00:39:37.306337351Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 00:39:37.306637 containerd[1685]: time="2025-11-06T00:39:37.306367636Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 6 00:39:37.306637 containerd[1685]: time="2025-11-06T00:39:37.306626613Z" level=info msg="containerd successfully booted in 0.124485s" Nov 6 00:39:37.306508 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 00:39:38.084605 systemd-networkd[1585]: ens192: Gained IPv6LL Nov 6 00:39:38.085119 systemd-timesyncd[1554]: Network configuration changed, trying to establish connection. Nov 6 00:39:38.090657 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 6 00:39:38.091187 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 00:39:38.092522 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Nov 6 00:39:38.095707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:39:38.108008 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 6 00:39:38.130952 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 00:39:38.141213 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 6 00:39:38.141394 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Nov 6 00:39:38.142123 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 00:39:39.722071 systemd-timesyncd[1554]: Network configuration changed, trying to establish connection. Nov 6 00:39:39.809282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:39:39.810502 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 00:39:39.810892 systemd[1]: Startup finished in 2.144s (kernel) + 4.006s (initrd) + 5.555s (userspace) = 11.706s. 
Nov 6 00:39:39.824034 (kubelet)[1851]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:39:39.923963 login[1758]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 6 00:39:39.935503 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 00:39:39.937135 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 00:39:39.939190 systemd-logind[1659]: New session 1 of user core. Nov 6 00:39:39.958449 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 00:39:39.962006 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 6 00:39:39.976241 (systemd)[1856]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 00:39:39.977791 systemd-logind[1659]: New session c1 of user core. Nov 6 00:39:40.080997 systemd[1856]: Queued start job for default target default.target. Nov 6 00:39:40.087310 systemd[1856]: Created slice app.slice - User Application Slice. Nov 6 00:39:40.087429 systemd[1856]: Reached target paths.target - Paths. Nov 6 00:39:40.087504 systemd[1856]: Reached target timers.target - Timers. Nov 6 00:39:40.088255 systemd[1856]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 00:39:40.095230 systemd[1856]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 00:39:40.095268 systemd[1856]: Reached target sockets.target - Sockets. Nov 6 00:39:40.095293 systemd[1856]: Reached target basic.target - Basic System. Nov 6 00:39:40.095314 systemd[1856]: Reached target default.target - Main User Target. Nov 6 00:39:40.095331 systemd[1856]: Startup finished in 113ms. Nov 6 00:39:40.095362 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 00:39:40.099299 systemd[1]: Started session-1.scope - Session 1 of User core. 
Nov 6 00:39:40.170159 login[1759]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 6 00:39:40.173685 systemd-logind[1659]: New session 2 of user core. Nov 6 00:39:40.187655 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 6 00:39:40.823823 kubelet[1851]: E1106 00:39:40.823781 1851 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:39:40.825338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:39:40.825462 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:39:40.825820 systemd[1]: kubelet.service: Consumed 676ms CPU time, 256.6M memory peak. Nov 6 00:39:51.075860 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 00:39:51.076972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:39:51.391715 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:39:51.409789 (kubelet)[1901]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:39:51.451401 kubelet[1901]: E1106 00:39:51.451376 1901 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:39:51.453408 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:39:51.453508 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 6 00:39:51.453702 systemd[1]: kubelet.service: Consumed 98ms CPU time, 110.7M memory peak. Nov 6 00:40:01.498674 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 6 00:40:01.500062 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:40:01.830865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:40:01.836744 (kubelet)[1916]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:40:01.861116 kubelet[1916]: E1106 00:40:01.861086 1916 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:40:01.862588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:40:01.862739 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:40:01.863122 systemd[1]: kubelet.service: Consumed 103ms CPU time, 108.2M memory peak. Nov 6 00:40:06.982348 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 00:40:06.983772 systemd[1]: Started sshd@0-139.178.70.101:22-139.178.89.65:56428.service - OpenSSH per-connection server daemon (139.178.89.65:56428). Nov 6 00:40:07.042032 sshd[1924]: Accepted publickey for core from 139.178.89.65 port 56428 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc Nov 6 00:40:07.042554 sshd-session[1924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:40:07.046721 systemd-logind[1659]: New session 3 of user core. Nov 6 00:40:07.061645 systemd[1]: Started session-3.scope - Session 3 of User core. 
Nov 6 00:40:07.118666 systemd[1]: Started sshd@1-139.178.70.101:22-139.178.89.65:56442.service - OpenSSH per-connection server daemon (139.178.89.65:56442). Nov 6 00:40:07.155137 sshd[1930]: Accepted publickey for core from 139.178.89.65 port 56442 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc Nov 6 00:40:07.156032 sshd-session[1930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:40:07.159891 systemd-logind[1659]: New session 4 of user core. Nov 6 00:40:07.164578 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 6 00:40:07.214444 sshd[1933]: Connection closed by 139.178.89.65 port 56442 Nov 6 00:40:07.214150 sshd-session[1930]: pam_unix(sshd:session): session closed for user core Nov 6 00:40:07.221650 systemd[1]: sshd@1-139.178.70.101:22-139.178.89.65:56442.service: Deactivated successfully. Nov 6 00:40:07.222689 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 00:40:07.223399 systemd-logind[1659]: Session 4 logged out. Waiting for processes to exit. Nov 6 00:40:07.224447 systemd-logind[1659]: Removed session 4. Nov 6 00:40:07.225431 systemd[1]: Started sshd@2-139.178.70.101:22-139.178.89.65:56450.service - OpenSSH per-connection server daemon (139.178.89.65:56450). Nov 6 00:40:07.266909 sshd[1939]: Accepted publickey for core from 139.178.89.65 port 56450 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc Nov 6 00:40:07.267720 sshd-session[1939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:40:07.271149 systemd-logind[1659]: New session 5 of user core. Nov 6 00:40:07.280656 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 6 00:40:07.328033 sshd[1942]: Connection closed by 139.178.89.65 port 56450 Nov 6 00:40:07.327945 sshd-session[1939]: pam_unix(sshd:session): session closed for user core Nov 6 00:40:07.335133 systemd[1]: sshd@2-139.178.70.101:22-139.178.89.65:56450.service: Deactivated successfully. 
Nov 6 00:40:07.336221 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 00:40:07.336872 systemd-logind[1659]: Session 5 logged out. Waiting for processes to exit. Nov 6 00:40:07.338152 systemd[1]: Started sshd@3-139.178.70.101:22-139.178.89.65:56464.service - OpenSSH per-connection server daemon (139.178.89.65:56464). Nov 6 00:40:07.339074 systemd-logind[1659]: Removed session 5. Nov 6 00:40:07.387795 sshd[1948]: Accepted publickey for core from 139.178.89.65 port 56464 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc Nov 6 00:40:07.388582 sshd-session[1948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:40:07.391745 systemd-logind[1659]: New session 6 of user core. Nov 6 00:40:07.401572 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 6 00:40:07.449865 sshd[1951]: Connection closed by 139.178.89.65 port 56464 Nov 6 00:40:07.450187 sshd-session[1948]: pam_unix(sshd:session): session closed for user core Nov 6 00:40:07.461318 systemd[1]: sshd@3-139.178.70.101:22-139.178.89.65:56464.service: Deactivated successfully. Nov 6 00:40:07.462168 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 00:40:07.462849 systemd-logind[1659]: Session 6 logged out. Waiting for processes to exit. Nov 6 00:40:07.463780 systemd-logind[1659]: Removed session 6. Nov 6 00:40:07.464631 systemd[1]: Started sshd@4-139.178.70.101:22-139.178.89.65:56466.service - OpenSSH per-connection server daemon (139.178.89.65:56466). Nov 6 00:40:07.507346 sshd[1957]: Accepted publickey for core from 139.178.89.65 port 56466 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc Nov 6 00:40:07.508199 sshd-session[1957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:40:07.511658 systemd-logind[1659]: New session 7 of user core. Nov 6 00:40:07.517580 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 6 00:40:07.574975 sudo[1961]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 00:40:07.575119 sudo[1961]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:40:07.589005 sudo[1961]: pam_unix(sudo:session): session closed for user root Nov 6 00:40:07.590303 sshd[1960]: Connection closed by 139.178.89.65 port 56466 Nov 6 00:40:07.590762 sshd-session[1957]: pam_unix(sshd:session): session closed for user core Nov 6 00:40:07.601051 systemd[1]: sshd@4-139.178.70.101:22-139.178.89.65:56466.service: Deactivated successfully. Nov 6 00:40:07.602564 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 00:40:07.603181 systemd-logind[1659]: Session 7 logged out. Waiting for processes to exit. Nov 6 00:40:07.605055 systemd[1]: Started sshd@5-139.178.70.101:22-139.178.89.65:38898.service - OpenSSH per-connection server daemon (139.178.89.65:38898). Nov 6 00:40:07.605857 systemd-logind[1659]: Removed session 7. Nov 6 00:40:07.641241 sshd[1967]: Accepted publickey for core from 139.178.89.65 port 38898 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc Nov 6 00:40:07.642011 sshd-session[1967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:40:07.644727 systemd-logind[1659]: New session 8 of user core. Nov 6 00:40:07.651672 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 6 00:40:07.700169 sudo[1972]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 00:40:07.700521 sudo[1972]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:40:07.702930 sudo[1972]: pam_unix(sudo:session): session closed for user root Nov 6 00:40:07.706569 sudo[1971]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 00:40:07.706718 sudo[1971]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:40:07.712681 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:40:07.735744 augenrules[1994]: No rules Nov 6 00:40:07.736436 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:40:07.736616 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:40:07.737835 sudo[1971]: pam_unix(sudo:session): session closed for user root Nov 6 00:40:07.739529 sshd[1970]: Connection closed by 139.178.89.65 port 38898 Nov 6 00:40:07.738795 sshd-session[1967]: pam_unix(sshd:session): session closed for user core Nov 6 00:40:07.744607 systemd[1]: sshd@5-139.178.70.101:22-139.178.89.65:38898.service: Deactivated successfully. Nov 6 00:40:07.745332 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 00:40:07.745753 systemd-logind[1659]: Session 8 logged out. Waiting for processes to exit. Nov 6 00:40:07.746906 systemd[1]: Started sshd@6-139.178.70.101:22-139.178.89.65:38902.service - OpenSSH per-connection server daemon (139.178.89.65:38902). Nov 6 00:40:07.748672 systemd-logind[1659]: Removed session 8. 
Nov 6 00:40:07.783700 sshd[2003]: Accepted publickey for core from 139.178.89.65 port 38902 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc Nov 6 00:40:07.784340 sshd-session[2003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:40:07.787523 systemd-logind[1659]: New session 9 of user core. Nov 6 00:40:07.794644 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 6 00:40:07.842951 sudo[2007]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 00:40:07.843097 sudo[2007]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:40:08.177232 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 6 00:40:08.185669 (dockerd)[2024]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 00:40:08.426786 dockerd[2024]: time="2025-11-06T00:40:08.426745981Z" level=info msg="Starting up" Nov 6 00:40:08.427224 dockerd[2024]: time="2025-11-06T00:40:08.427209649Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 6 00:40:08.433096 dockerd[2024]: time="2025-11-06T00:40:08.433040234Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 6 00:40:08.460201 dockerd[2024]: time="2025-11-06T00:40:08.460055194Z" level=info msg="Loading containers: start." Nov 6 00:40:08.466531 kernel: Initializing XFRM netlink socket Nov 6 00:40:08.597929 systemd-timesyncd[1554]: Network configuration changed, trying to establish connection. Nov 6 00:40:08.621771 systemd-networkd[1585]: docker0: Link UP Nov 6 00:40:08.624237 dockerd[2024]: time="2025-11-06T00:40:08.624218315Z" level=info msg="Loading containers: done." 
Nov 6 00:40:08.634059 dockerd[2024]: time="2025-11-06T00:40:08.634035045Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 00:40:08.634143 dockerd[2024]: time="2025-11-06T00:40:08.634092819Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 6 00:40:08.634143 dockerd[2024]: time="2025-11-06T00:40:08.634135347Z" level=info msg="Initializing buildkit" Nov 6 00:40:08.643679 dockerd[2024]: time="2025-11-06T00:40:08.643653609Z" level=info msg="Completed buildkit initialization" Nov 6 00:40:08.649686 dockerd[2024]: time="2025-11-06T00:40:08.649375397Z" level=info msg="Daemon has completed initialization" Nov 6 00:40:08.649686 dockerd[2024]: time="2025-11-06T00:40:08.649616943Z" level=info msg="API listen on /run/docker.sock" Nov 6 00:40:08.649763 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 00:41:47.151190 systemd-resolved[1341]: Clock change detected. Flushing caches. Nov 6 00:41:47.151344 systemd-timesyncd[1554]: Contacted time server 66.244.16.123:123 (2.flatcar.pool.ntp.org). Nov 6 00:41:47.151984 systemd-timesyncd[1554]: Initial clock synchronization to Thu 2025-11-06 00:41:47.150903 UTC. Nov 6 00:41:47.852314 containerd[1685]: time="2025-11-06T00:41:47.852269024Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 6 00:41:48.510656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount580180406.mount: Deactivated successfully. 
Nov 6 00:41:49.541857 containerd[1685]: time="2025-11-06T00:41:49.541519085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:41:49.542242 containerd[1685]: time="2025-11-06T00:41:49.542230440Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Nov 6 00:41:49.542639 containerd[1685]: time="2025-11-06T00:41:49.542629218Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:41:49.544014 containerd[1685]: time="2025-11-06T00:41:49.543997215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:41:49.544584 containerd[1685]: time="2025-11-06T00:41:49.544572518Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.692282223s" Nov 6 00:41:49.544720 containerd[1685]: time="2025-11-06T00:41:49.544623969Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 6 00:41:49.545113 containerd[1685]: time="2025-11-06T00:41:49.545092575Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 6 00:41:50.466927 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Nov 6 00:41:50.468226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:41:50.643108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:41:50.649326 (kubelet)[2307]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:41:50.729978 kubelet[2307]: E1106 00:41:50.729765 2307 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:41:50.733396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:41:50.733515 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:41:50.733770 systemd[1]: kubelet.service: Consumed 105ms CPU time, 110.6M memory peak. 
Nov 6 00:41:50.952599 containerd[1685]: time="2025-11-06T00:41:50.952564249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:41:50.958616 containerd[1685]: time="2025-11-06T00:41:50.958590509Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Nov 6 00:41:50.966171 containerd[1685]: time="2025-11-06T00:41:50.966148553Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:41:50.971673 containerd[1685]: time="2025-11-06T00:41:50.971497129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:41:50.972273 containerd[1685]: time="2025-11-06T00:41:50.972047339Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.426872153s" Nov 6 00:41:50.972273 containerd[1685]: time="2025-11-06T00:41:50.972214702Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 6 00:41:50.972506 containerd[1685]: time="2025-11-06T00:41:50.972488020Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 6 00:41:52.337898 containerd[1685]: time="2025-11-06T00:41:52.337578192Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:41:52.338450 containerd[1685]: time="2025-11-06T00:41:52.338336178Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Nov 6 00:41:52.338725 containerd[1685]: time="2025-11-06T00:41:52.338707831Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:41:52.340594 containerd[1685]: time="2025-11-06T00:41:52.340575795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:41:52.341343 containerd[1685]: time="2025-11-06T00:41:52.341327376Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.368822948s" Nov 6 00:41:52.341401 containerd[1685]: time="2025-11-06T00:41:52.341392878Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 6 00:41:52.341973 containerd[1685]: time="2025-11-06T00:41:52.341920712Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 6 00:41:53.348794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3157422470.mount: Deactivated successfully. 
Nov 6 00:41:53.620433 containerd[1685]: time="2025-11-06T00:41:53.620043564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:41:53.620958 containerd[1685]: time="2025-11-06T00:41:53.620947690Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Nov 6 00:41:53.621343 containerd[1685]: time="2025-11-06T00:41:53.621330269Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:41:53.622875 containerd[1685]: time="2025-11-06T00:41:53.622738927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:41:53.623369 containerd[1685]: time="2025-11-06T00:41:53.623342677Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.281393961s" Nov 6 00:41:53.623431 containerd[1685]: time="2025-11-06T00:41:53.623419743Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 6 00:41:53.623753 containerd[1685]: time="2025-11-06T00:41:53.623737388Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 6 00:41:54.316196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1754329634.mount: Deactivated successfully. 
Nov 6 00:41:55.711716 containerd[1685]: time="2025-11-06T00:41:55.711674125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:41:55.718318 containerd[1685]: time="2025-11-06T00:41:55.717883421Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Nov 6 00:41:55.723891 containerd[1685]: time="2025-11-06T00:41:55.723874410Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:41:55.729372 containerd[1685]: time="2025-11-06T00:41:55.729336081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:41:55.730492 containerd[1685]: time="2025-11-06T00:41:55.730270216Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.106513667s" Nov 6 00:41:55.730492 containerd[1685]: time="2025-11-06T00:41:55.730293921Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 6 00:41:55.730555 containerd[1685]: time="2025-11-06T00:41:55.730548590Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 6 00:41:56.306398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4015397279.mount: Deactivated successfully. 
Nov 6 00:41:56.308622 containerd[1685]: time="2025-11-06T00:41:56.308592955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:41:56.309240 containerd[1685]: time="2025-11-06T00:41:56.309218414Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Nov 6 00:41:56.309813 containerd[1685]: time="2025-11-06T00:41:56.309771022Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:41:56.311118 containerd[1685]: time="2025-11-06T00:41:56.311094711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:41:56.312001 containerd[1685]: time="2025-11-06T00:41:56.311977368Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 581.398724ms" Nov 6 00:41:56.312034 containerd[1685]: time="2025-11-06T00:41:56.311999816Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 6 00:41:56.312383 containerd[1685]: time="2025-11-06T00:41:56.312310014Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 6 00:42:00.573874 update_engine[1661]: I20251106 00:42:00.573753 1661 update_attempter.cc:509] Updating boot flags... Nov 6 00:42:00.871448 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Nov 6 00:42:00.873410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:42:01.218674 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:42:01.227257 (kubelet)[2457]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:42:01.231455 containerd[1685]: time="2025-11-06T00:42:01.231136518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:42:01.236135 containerd[1685]: time="2025-11-06T00:42:01.236109840Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Nov 6 00:42:01.244873 containerd[1685]: time="2025-11-06T00:42:01.244237188Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:42:01.251263 containerd[1685]: time="2025-11-06T00:42:01.251240458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:42:01.252007 containerd[1685]: time="2025-11-06T00:42:01.251989239Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 4.939603738s" Nov 6 00:42:01.252074 containerd[1685]: time="2025-11-06T00:42:01.252064105Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 6 00:42:01.385516 kubelet[2457]: E1106 00:42:01.385481 
2457 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:42:01.386928 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:42:01.387079 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:42:01.387383 systemd[1]: kubelet.service: Consumed 105ms CPU time, 109.4M memory peak. Nov 6 00:42:04.328921 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:42:04.329282 systemd[1]: kubelet.service: Consumed 105ms CPU time, 109.4M memory peak. Nov 6 00:42:04.330763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:42:04.349539 systemd[1]: Reload requested from client PID 2488 ('systemctl') (unit session-9.scope)... Nov 6 00:42:04.349548 systemd[1]: Reloading... Nov 6 00:42:04.425884 zram_generator::config[2536]: No configuration found. Nov 6 00:42:04.477628 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 6 00:42:04.546014 systemd[1]: Reloading finished in 196 ms. Nov 6 00:42:04.568071 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 6 00:42:04.568186 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 6 00:42:04.568393 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:42:04.569858 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:42:04.937689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 00:42:04.945174 (kubelet)[2600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:42:04.978611 kubelet[2600]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:42:04.978833 kubelet[2600]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:42:04.979314 kubelet[2600]: I1106 00:42:04.979284 2600 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:42:05.144073 kubelet[2600]: I1106 00:42:05.144036 2600 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 6 00:42:05.144194 kubelet[2600]: I1106 00:42:05.144156 2600 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:42:05.145878 kubelet[2600]: I1106 00:42:05.145692 2600 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 6 00:42:05.145878 kubelet[2600]: I1106 00:42:05.145701 2600 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 6 00:42:05.145878 kubelet[2600]: I1106 00:42:05.145837 2600 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 00:42:05.158115 kubelet[2600]: E1106 00:42:05.157853 2600 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 00:42:05.160228 kubelet[2600]: I1106 00:42:05.160151 2600 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:42:05.183124 kubelet[2600]: I1106 00:42:05.183009 2600 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:42:05.227606 kubelet[2600]: I1106 00:42:05.227473 2600 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 6 00:42:05.234414 kubelet[2600]: I1106 00:42:05.232312 2600 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:42:05.251144 kubelet[2600]: I1106 00:42:05.232332 2600 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:42:05.251144 kubelet[2600]: I1106 00:42:05.237343 2600 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 00:42:05.251144 
kubelet[2600]: I1106 00:42:05.237350 2600 container_manager_linux.go:306] "Creating device plugin manager" Nov 6 00:42:05.251144 kubelet[2600]: I1106 00:42:05.237406 2600 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 6 00:42:05.253603 kubelet[2600]: I1106 00:42:05.253569 2600 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:42:05.266512 kubelet[2600]: I1106 00:42:05.264059 2600 kubelet.go:475] "Attempting to sync node with API server" Nov 6 00:42:05.266512 kubelet[2600]: I1106 00:42:05.264072 2600 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:42:05.266512 kubelet[2600]: I1106 00:42:05.264087 2600 kubelet.go:387] "Adding apiserver pod source" Nov 6 00:42:05.266512 kubelet[2600]: I1106 00:42:05.264099 2600 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:42:05.289124 kubelet[2600]: E1106 00:42:05.289030 2600 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 00:42:05.289262 kubelet[2600]: E1106 00:42:05.289238 2600 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 00:42:05.289316 kubelet[2600]: I1106 00:42:05.289302 2600 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:42:05.303015 kubelet[2600]: I1106 00:42:05.302997 2600 kubelet.go:940] "Not starting ClusterTrustBundle 
informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 00:42:05.303015 kubelet[2600]: I1106 00:42:05.303020 2600 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 6 00:42:05.516898 kubelet[2600]: W1106 00:42:05.323926 2600 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 6 00:42:05.516898 kubelet[2600]: I1106 00:42:05.364877 2600 server.go:1262] "Started kubelet" Nov 6 00:42:05.516898 kubelet[2600]: I1106 00:42:05.382174 2600 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:42:05.516898 kubelet[2600]: I1106 00:42:05.386958 2600 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:42:05.516898 kubelet[2600]: I1106 00:42:05.405130 2600 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:42:05.516898 kubelet[2600]: I1106 00:42:05.406535 2600 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 6 00:42:05.516898 kubelet[2600]: E1106 00:42:05.406626 2600 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:42:05.516898 kubelet[2600]: I1106 00:42:05.410750 2600 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:42:05.516898 kubelet[2600]: I1106 00:42:05.410777 2600 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 6 00:42:05.516898 kubelet[2600]: I1106 00:42:05.410920 2600 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:42:05.516898 kubelet[2600]: I1106 00:42:05.423738 2600 server.go:310] "Adding debug handlers to kubelet server" Nov 6 00:42:05.516898 
kubelet[2600]: I1106 00:42:05.429494 2600 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 6 00:42:05.516898 kubelet[2600]: I1106 00:42:05.429521 2600 reconciler.go:29] "Reconciler: start to sync state" Nov 6 00:42:05.517344 kubelet[2600]: E1106 00:42:05.424180 2600 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.101:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.101:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875441e202b7874 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-06 00:42:05.36484466 +0000 UTC m=+0.417413049,LastTimestamp:2025-11-06 00:42:05.36484466 +0000 UTC m=+0.417413049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 6 00:42:05.517344 kubelet[2600]: E1106 00:42:05.432132 2600 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="200ms" Nov 6 00:42:05.517344 kubelet[2600]: E1106 00:42:05.433323 2600 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 00:42:05.517344 kubelet[2600]: I1106 00:42:05.433393 2600 factory.go:223] Registration of the containerd container factory successfully Nov 6 00:42:05.517344 kubelet[2600]: I1106 00:42:05.433400 2600 factory.go:223] Registration of the systemd container factory successfully Nov 6 00:42:05.517344 kubelet[2600]: I1106 00:42:05.433438 2600 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:42:05.517525 kubelet[2600]: E1106 00:42:05.437664 2600 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 00:42:05.517525 kubelet[2600]: I1106 00:42:05.461168 2600 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 6 00:42:05.517525 kubelet[2600]: I1106 00:42:05.461819 2600 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 6 00:42:05.517525 kubelet[2600]: I1106 00:42:05.461826 2600 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 6 00:42:05.517525 kubelet[2600]: I1106 00:42:05.461838 2600 kubelet.go:2427] "Starting kubelet main sync loop" Nov 6 00:42:05.517525 kubelet[2600]: E1106 00:42:05.461860 2600 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:42:05.517525 kubelet[2600]: E1106 00:42:05.464012 2600 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 00:42:05.517525 kubelet[2600]: I1106 00:42:05.464544 2600 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:42:05.517525 kubelet[2600]: I1106 00:42:05.464551 2600 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:42:05.517525 kubelet[2600]: I1106 00:42:05.464560 2600 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:42:05.517525 kubelet[2600]: E1106 00:42:05.507101 2600 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:42:05.522204 kubelet[2600]: I1106 00:42:05.522058 2600 policy_none.go:49] "None policy: Start" Nov 6 00:42:05.522204 kubelet[2600]: I1106 00:42:05.522071 2600 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 6 00:42:05.522204 kubelet[2600]: I1106 00:42:05.522078 2600 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 6 00:42:05.530270 kubelet[2600]: I1106 00:42:05.530203 2600 policy_none.go:47] "Start" Nov 6 00:42:05.542122 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. Nov 6 00:42:05.554408 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 6 00:42:05.556618 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 6 00:42:05.562755 kubelet[2600]: E1106 00:42:05.562738 2600 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 6 00:42:05.566455 kubelet[2600]: E1106 00:42:05.566336 2600 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 00:42:05.566640 kubelet[2600]: I1106 00:42:05.566632 2600 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:42:05.566667 kubelet[2600]: I1106 00:42:05.566640 2600 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:42:05.566802 kubelet[2600]: I1106 00:42:05.566795 2600 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:42:05.568029 kubelet[2600]: E1106 00:42:05.568018 2600 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 6 00:42:05.568060 kubelet[2600]: E1106 00:42:05.568056 2600 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 6 00:42:05.632506 kubelet[2600]: E1106 00:42:05.632475 2600 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="400ms" Nov 6 00:42:05.668805 kubelet[2600]: I1106 00:42:05.668716 2600 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:42:05.669046 kubelet[2600]: E1106 00:42:05.669015 2600 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" Nov 6 00:42:05.772136 systemd[1]: Created slice kubepods-burstable-pode57bbf031c6f27c793c5b7d754ae6750.slice - libcontainer container kubepods-burstable-pode57bbf031c6f27c793c5b7d754ae6750.slice. Nov 6 00:42:05.789002 kubelet[2600]: E1106 00:42:05.788983 2600 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:42:05.791978 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Nov 6 00:42:05.798723 kubelet[2600]: E1106 00:42:05.798642 2600 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:42:05.800318 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. 
Nov 6 00:42:05.801741 kubelet[2600]: E1106 00:42:05.801727 2600 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:42:05.832262 kubelet[2600]: I1106 00:42:05.832208 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:42:05.832262 kubelet[2600]: I1106 00:42:05.832233 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:42:05.832262 kubelet[2600]: I1106 00:42:05.832244 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:42:05.832262 kubelet[2600]: I1106 00:42:05.832270 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e57bbf031c6f27c793c5b7d754ae6750-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e57bbf031c6f27c793c5b7d754ae6750\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:42:05.832453 kubelet[2600]: I1106 00:42:05.832281 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e57bbf031c6f27c793c5b7d754ae6750-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e57bbf031c6f27c793c5b7d754ae6750\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:42:05.832453 kubelet[2600]: I1106 00:42:05.832289 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:42:05.832453 kubelet[2600]: I1106 00:42:05.832298 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:42:05.832453 kubelet[2600]: I1106 00:42:05.832319 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 6 00:42:05.832453 kubelet[2600]: I1106 00:42:05.832334 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e57bbf031c6f27c793c5b7d754ae6750-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e57bbf031c6f27c793c5b7d754ae6750\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:42:05.870150 kubelet[2600]: I1106 00:42:05.870122 2600 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:42:05.870452 kubelet[2600]: E1106 
00:42:05.870437 2600 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" Nov 6 00:42:06.032861 kubelet[2600]: E1106 00:42:06.032792 2600 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="800ms" Nov 6 00:42:06.092699 containerd[1685]: time="2025-11-06T00:42:06.092669719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e57bbf031c6f27c793c5b7d754ae6750,Namespace:kube-system,Attempt:0,}" Nov 6 00:42:06.103175 containerd[1685]: time="2025-11-06T00:42:06.103024314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Nov 6 00:42:06.103751 containerd[1685]: time="2025-11-06T00:42:06.103646409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Nov 6 00:42:06.271971 kubelet[2600]: I1106 00:42:06.271936 2600 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:42:06.272196 kubelet[2600]: E1106 00:42:06.272180 2600 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" Nov 6 00:42:06.279599 kubelet[2600]: E1106 00:42:06.279573 2600 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 00:42:06.382288 kubelet[2600]: E1106 00:42:06.382205 2600 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 00:42:06.498220 kubelet[2600]: E1106 00:42:06.498187 2600 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 00:42:06.558785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2095896197.mount: Deactivated successfully. 
Nov 6 00:42:06.591243 containerd[1685]: time="2025-11-06T00:42:06.591196775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:42:06.627907 containerd[1685]: time="2025-11-06T00:42:06.627876910Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 6 00:42:06.628939 containerd[1685]: time="2025-11-06T00:42:06.628916941Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:42:06.630260 containerd[1685]: time="2025-11-06T00:42:06.630223943Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 6 00:42:06.630326 containerd[1685]: time="2025-11-06T00:42:06.630261644Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:42:06.630988 containerd[1685]: time="2025-11-06T00:42:06.630971739Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:42:06.631336 containerd[1685]: time="2025-11-06T00:42:06.631318281Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 6 00:42:06.631614 containerd[1685]: time="2025-11-06T00:42:06.631594650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:42:06.631874 
containerd[1685]: time="2025-11-06T00:42:06.631854466Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 538.253093ms" Nov 6 00:42:06.634585 containerd[1685]: time="2025-11-06T00:42:06.634494168Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 530.779045ms" Nov 6 00:42:06.635260 containerd[1685]: time="2025-11-06T00:42:06.635229861Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 530.884382ms" Nov 6 00:42:06.724795 containerd[1685]: time="2025-11-06T00:42:06.724743793Z" level=info msg="connecting to shim dc5d64110c4dd664784c05978fd27cec9dab120642f91d4375f45b3d226c8270" address="unix:///run/containerd/s/17a093de0c8c383f583db7e4b8fb8f370d82036a7d86ebca072ce7f54aa6456d" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:42:06.729144 containerd[1685]: time="2025-11-06T00:42:06.729073085Z" level=info msg="connecting to shim 886a6c29acf8511e8f2b4877d984117d925818c87bad367ac52298fc52592272" address="unix:///run/containerd/s/b5f616e1307063b0bddf30d8b0914d8033b1563b59d655a98dff5bcab3d0e4b8" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:42:06.729573 containerd[1685]: time="2025-11-06T00:42:06.729507464Z" level=info msg="connecting to shim 
9bddd81c76327008613c91253f9712376e53c223e30895a28a3945627d4e4a1f" address="unix:///run/containerd/s/5a645ab51ea5f817cc8bec0055a41b8d97e86834b528992ae619785950fe2b90" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:42:06.795994 systemd[1]: Started cri-containerd-9bddd81c76327008613c91253f9712376e53c223e30895a28a3945627d4e4a1f.scope - libcontainer container 9bddd81c76327008613c91253f9712376e53c223e30895a28a3945627d4e4a1f. Nov 6 00:42:06.799111 systemd[1]: Started cri-containerd-886a6c29acf8511e8f2b4877d984117d925818c87bad367ac52298fc52592272.scope - libcontainer container 886a6c29acf8511e8f2b4877d984117d925818c87bad367ac52298fc52592272. Nov 6 00:42:06.801078 systemd[1]: Started cri-containerd-dc5d64110c4dd664784c05978fd27cec9dab120642f91d4375f45b3d226c8270.scope - libcontainer container dc5d64110c4dd664784c05978fd27cec9dab120642f91d4375f45b3d226c8270. Nov 6 00:42:06.838203 kubelet[2600]: E1106 00:42:06.838170 2600 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="1.6s" Nov 6 00:42:06.857094 containerd[1685]: time="2025-11-06T00:42:06.857060341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"886a6c29acf8511e8f2b4877d984117d925818c87bad367ac52298fc52592272\"" Nov 6 00:42:06.863555 containerd[1685]: time="2025-11-06T00:42:06.863341017Z" level=info msg="CreateContainer within sandbox \"886a6c29acf8511e8f2b4877d984117d925818c87bad367ac52298fc52592272\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 00:42:06.869385 containerd[1685]: time="2025-11-06T00:42:06.869353399Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e57bbf031c6f27c793c5b7d754ae6750,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bddd81c76327008613c91253f9712376e53c223e30895a28a3945627d4e4a1f\"" Nov 6 00:42:06.872453 kubelet[2600]: E1106 00:42:06.871741 2600 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 00:42:06.873676 containerd[1685]: time="2025-11-06T00:42:06.873193412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc5d64110c4dd664784c05978fd27cec9dab120642f91d4375f45b3d226c8270\"" Nov 6 00:42:06.874257 containerd[1685]: time="2025-11-06T00:42:06.874211233Z" level=info msg="CreateContainer within sandbox \"9bddd81c76327008613c91253f9712376e53c223e30895a28a3945627d4e4a1f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 00:42:06.875590 containerd[1685]: time="2025-11-06T00:42:06.875572072Z" level=info msg="CreateContainer within sandbox \"dc5d64110c4dd664784c05978fd27cec9dab120642f91d4375f45b3d226c8270\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 00:42:06.876342 containerd[1685]: time="2025-11-06T00:42:06.876297240Z" level=info msg="Container 691879eb86f6b9f8b6b6af62bb4049e1efd343484e5544f4f82dcc088adf7d88: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:42:06.884042 containerd[1685]: time="2025-11-06T00:42:06.884021952Z" level=info msg="Container 6206a146ca1efa9fb22eebd9bbb1004556234cb6cf9abcc3f083b9c82053b9a3: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:42:06.884492 containerd[1685]: time="2025-11-06T00:42:06.884481724Z" level=info msg="Container 
7960b79a2b6e1d2ad7105019632555518f873eaa2c3c3bc8c7ef47065a1f25d3: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:42:06.890472 containerd[1685]: time="2025-11-06T00:42:06.890414660Z" level=info msg="CreateContainer within sandbox \"886a6c29acf8511e8f2b4877d984117d925818c87bad367ac52298fc52592272\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"691879eb86f6b9f8b6b6af62bb4049e1efd343484e5544f4f82dcc088adf7d88\"" Nov 6 00:42:06.891198 containerd[1685]: time="2025-11-06T00:42:06.891158028Z" level=info msg="CreateContainer within sandbox \"9bddd81c76327008613c91253f9712376e53c223e30895a28a3945627d4e4a1f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7960b79a2b6e1d2ad7105019632555518f873eaa2c3c3bc8c7ef47065a1f25d3\"" Nov 6 00:42:06.891409 containerd[1685]: time="2025-11-06T00:42:06.891350370Z" level=info msg="CreateContainer within sandbox \"dc5d64110c4dd664784c05978fd27cec9dab120642f91d4375f45b3d226c8270\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6206a146ca1efa9fb22eebd9bbb1004556234cb6cf9abcc3f083b9c82053b9a3\"" Nov 6 00:42:06.891479 containerd[1685]: time="2025-11-06T00:42:06.891360183Z" level=info msg="StartContainer for \"691879eb86f6b9f8b6b6af62bb4049e1efd343484e5544f4f82dcc088adf7d88\"" Nov 6 00:42:06.891578 containerd[1685]: time="2025-11-06T00:42:06.891562659Z" level=info msg="StartContainer for \"7960b79a2b6e1d2ad7105019632555518f873eaa2c3c3bc8c7ef47065a1f25d3\"" Nov 6 00:42:06.892195 containerd[1685]: time="2025-11-06T00:42:06.892182324Z" level=info msg="connecting to shim 691879eb86f6b9f8b6b6af62bb4049e1efd343484e5544f4f82dcc088adf7d88" address="unix:///run/containerd/s/b5f616e1307063b0bddf30d8b0914d8033b1563b59d655a98dff5bcab3d0e4b8" protocol=ttrpc version=3 Nov 6 00:42:06.892342 containerd[1685]: time="2025-11-06T00:42:06.892328570Z" level=info msg="StartContainer for \"6206a146ca1efa9fb22eebd9bbb1004556234cb6cf9abcc3f083b9c82053b9a3\"" Nov 6 
00:42:06.892412 containerd[1685]: time="2025-11-06T00:42:06.892230257Z" level=info msg="connecting to shim 7960b79a2b6e1d2ad7105019632555518f873eaa2c3c3bc8c7ef47065a1f25d3" address="unix:///run/containerd/s/5a645ab51ea5f817cc8bec0055a41b8d97e86834b528992ae619785950fe2b90" protocol=ttrpc version=3 Nov 6 00:42:06.893422 containerd[1685]: time="2025-11-06T00:42:06.893387620Z" level=info msg="connecting to shim 6206a146ca1efa9fb22eebd9bbb1004556234cb6cf9abcc3f083b9c82053b9a3" address="unix:///run/containerd/s/17a093de0c8c383f583db7e4b8fb8f370d82036a7d86ebca072ce7f54aa6456d" protocol=ttrpc version=3 Nov 6 00:42:06.908979 systemd[1]: Started cri-containerd-691879eb86f6b9f8b6b6af62bb4049e1efd343484e5544f4f82dcc088adf7d88.scope - libcontainer container 691879eb86f6b9f8b6b6af62bb4049e1efd343484e5544f4f82dcc088adf7d88. Nov 6 00:42:06.912451 systemd[1]: Started cri-containerd-7960b79a2b6e1d2ad7105019632555518f873eaa2c3c3bc8c7ef47065a1f25d3.scope - libcontainer container 7960b79a2b6e1d2ad7105019632555518f873eaa2c3c3bc8c7ef47065a1f25d3. Nov 6 00:42:06.923391 systemd[1]: Started cri-containerd-6206a146ca1efa9fb22eebd9bbb1004556234cb6cf9abcc3f083b9c82053b9a3.scope - libcontainer container 6206a146ca1efa9fb22eebd9bbb1004556234cb6cf9abcc3f083b9c82053b9a3. 
Nov 6 00:42:06.970454 containerd[1685]: time="2025-11-06T00:42:06.970425775Z" level=info msg="StartContainer for \"7960b79a2b6e1d2ad7105019632555518f873eaa2c3c3bc8c7ef47065a1f25d3\" returns successfully" Nov 6 00:42:06.986285 containerd[1685]: time="2025-11-06T00:42:06.986259940Z" level=info msg="StartContainer for \"691879eb86f6b9f8b6b6af62bb4049e1efd343484e5544f4f82dcc088adf7d88\" returns successfully" Nov 6 00:42:06.989213 containerd[1685]: time="2025-11-06T00:42:06.989188780Z" level=info msg="StartContainer for \"6206a146ca1efa9fb22eebd9bbb1004556234cb6cf9abcc3f083b9c82053b9a3\" returns successfully" Nov 6 00:42:07.074913 kubelet[2600]: I1106 00:42:07.074893 2600 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:42:07.075168 kubelet[2600]: E1106 00:42:07.075152 2600 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" Nov 6 00:42:07.241392 kubelet[2600]: E1106 00:42:07.241368 2600 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 00:42:07.477954 kubelet[2600]: E1106 00:42:07.477936 2600 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:42:07.478666 kubelet[2600]: E1106 00:42:07.478648 2600 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:42:07.479425 kubelet[2600]: E1106 00:42:07.479416 2600 kubelet.go:3215] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:42:08.481157 kubelet[2600]: E1106 00:42:08.481133 2600 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:42:08.482875 kubelet[2600]: E1106 00:42:08.481526 2600 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:42:08.482875 kubelet[2600]: E1106 00:42:08.481886 2600 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:42:08.676402 kubelet[2600]: I1106 00:42:08.676382 2600 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:42:08.688706 kubelet[2600]: E1106 00:42:08.688683 2600 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 6 00:42:08.793245 kubelet[2600]: I1106 00:42:08.792564 2600 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 6 00:42:08.793245 kubelet[2600]: E1106 00:42:08.792588 2600 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 6 00:42:08.799663 kubelet[2600]: E1106 00:42:08.799643 2600 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:42:08.912425 kubelet[2600]: I1106 00:42:08.912375 2600 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 00:42:08.916825 kubelet[2600]: E1106 00:42:08.916787 2600 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-localhost" Nov 6 00:42:08.916825 kubelet[2600]: I1106 00:42:08.916806 2600 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 6 00:42:08.917949 kubelet[2600]: E1106 00:42:08.917931 2600 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 6 00:42:08.917949 kubelet[2600]: I1106 00:42:08.917947 2600 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 00:42:08.918926 kubelet[2600]: E1106 00:42:08.918912 2600 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 6 00:42:09.281836 kubelet[2600]: I1106 00:42:09.281680 2600 apiserver.go:52] "Watching apiserver" Nov 6 00:42:09.330449 kubelet[2600]: I1106 00:42:09.330412 2600 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 6 00:42:10.408169 systemd[1]: Reload requested from client PID 2878 ('systemctl') (unit session-9.scope)... Nov 6 00:42:10.408186 systemd[1]: Reloading... Nov 6 00:42:10.470877 zram_generator::config[2926]: No configuration found. Nov 6 00:42:10.524039 kubelet[2600]: I1106 00:42:10.524005 2600 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 00:42:10.556541 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 6 00:42:10.636837 systemd[1]: Reloading finished in 228 ms. Nov 6 00:42:10.656119 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 6 00:42:10.678622 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 00:42:10.678814 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:42:10.678853 systemd[1]: kubelet.service: Consumed 471ms CPU time, 121.8M memory peak. Nov 6 00:42:10.680128 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:42:11.158002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:42:11.169066 (kubelet)[2990]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:42:11.243646 kubelet[2990]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:42:11.243646 kubelet[2990]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:42:11.244712 kubelet[2990]: I1106 00:42:11.244233 2990 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:42:11.248427 kubelet[2990]: I1106 00:42:11.248409 2990 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 6 00:42:11.248488 kubelet[2990]: I1106 00:42:11.248475 2990 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:42:11.248532 kubelet[2990]: I1106 00:42:11.248527 2990 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 6 00:42:11.248578 kubelet[2990]: I1106 00:42:11.248571 2990 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 6 00:42:11.248704 kubelet[2990]: I1106 00:42:11.248698 2990 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 00:42:11.249380 kubelet[2990]: I1106 00:42:11.249372 2990 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 6 00:42:11.251626 kubelet[2990]: I1106 00:42:11.251617 2990 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:42:11.260442 kubelet[2990]: I1106 00:42:11.260426 2990 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:42:11.262127 kubelet[2990]: I1106 00:42:11.262115 2990 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 6 00:42:11.262250 kubelet[2990]: I1106 00:42:11.262235 2990 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:42:11.262356 kubelet[2990]: I1106 00:42:11.262250 2990 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:42:11.262425 kubelet[2990]: I1106 00:42:11.262360 2990 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 00:42:11.262425 kubelet[2990]: I1106 00:42:11.262366 2990 container_manager_linux.go:306] "Creating device plugin manager" Nov 6 00:42:11.262425 kubelet[2990]: I1106 00:42:11.262382 2990 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 6 00:42:11.262836 kubelet[2990]: I1106 00:42:11.262817 2990 state_mem.go:36] 
"Initialized new in-memory state store" Nov 6 00:42:11.262981 kubelet[2990]: I1106 00:42:11.262969 2990 kubelet.go:475] "Attempting to sync node with API server" Nov 6 00:42:11.262981 kubelet[2990]: I1106 00:42:11.262980 2990 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:42:11.263172 kubelet[2990]: I1106 00:42:11.262996 2990 kubelet.go:387] "Adding apiserver pod source" Nov 6 00:42:11.263172 kubelet[2990]: I1106 00:42:11.263006 2990 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:42:11.267741 kubelet[2990]: I1106 00:42:11.267729 2990 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:42:11.268559 kubelet[2990]: I1106 00:42:11.268174 2990 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 00:42:11.268696 kubelet[2990]: I1106 00:42:11.268690 2990 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 6 00:42:11.270613 kubelet[2990]: I1106 00:42:11.270605 2990 server.go:1262] "Started kubelet" Nov 6 00:42:11.273581 kubelet[2990]: I1106 00:42:11.273514 2990 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:42:11.273737 kubelet[2990]: I1106 00:42:11.273724 2990 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:42:11.273795 kubelet[2990]: I1106 00:42:11.273789 2990 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 6 00:42:11.274286 kubelet[2990]: I1106 00:42:11.274278 2990 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:42:11.275562 kubelet[2990]: I1106 00:42:11.275348 2990 server.go:310] "Adding debug handlers to kubelet server" Nov 
6 00:42:11.275962 kubelet[2990]: I1106 00:42:11.275951 2990 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:42:11.278407 kubelet[2990]: I1106 00:42:11.277660 2990 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:42:11.282941 kubelet[2990]: I1106 00:42:11.282929 2990 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 6 00:42:11.283468 kubelet[2990]: I1106 00:42:11.283461 2990 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 6 00:42:11.283570 kubelet[2990]: I1106 00:42:11.283564 2990 reconciler.go:29] "Reconciler: start to sync state" Nov 6 00:42:11.284580 kubelet[2990]: E1106 00:42:11.284189 2990 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 00:42:11.285253 kubelet[2990]: I1106 00:42:11.284954 2990 factory.go:223] Registration of the systemd container factory successfully Nov 6 00:42:11.285678 kubelet[2990]: I1106 00:42:11.285481 2990 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:42:11.285803 kubelet[2990]: I1106 00:42:11.285141 2990 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 6 00:42:11.287136 kubelet[2990]: I1106 00:42:11.287102 2990 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 6 00:42:11.287136 kubelet[2990]: I1106 00:42:11.287128 2990 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 6 00:42:11.287207 kubelet[2990]: I1106 00:42:11.287142 2990 kubelet.go:2427] "Starting kubelet main sync loop" Nov 6 00:42:11.287207 kubelet[2990]: E1106 00:42:11.287163 2990 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:42:11.287755 kubelet[2990]: I1106 00:42:11.287746 2990 factory.go:223] Registration of the containerd container factory successfully Nov 6 00:42:11.313762 kubelet[2990]: I1106 00:42:11.313745 2990 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:42:11.313762 kubelet[2990]: I1106 00:42:11.313754 2990 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:42:11.313762 kubelet[2990]: I1106 00:42:11.313766 2990 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:42:11.313902 kubelet[2990]: I1106 00:42:11.313845 2990 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 6 00:42:11.313902 kubelet[2990]: I1106 00:42:11.313851 2990 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 6 00:42:11.313902 kubelet[2990]: I1106 00:42:11.313861 2990 policy_none.go:49] "None policy: Start" Nov 6 00:42:11.313902 kubelet[2990]: I1106 00:42:11.313878 2990 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 6 00:42:11.313902 kubelet[2990]: I1106 00:42:11.313885 2990 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 6 00:42:11.314012 kubelet[2990]: I1106 00:42:11.313935 2990 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 6 00:42:11.314012 kubelet[2990]: I1106 00:42:11.313940 2990 policy_none.go:47] "Start" Nov 6 00:42:11.317982 kubelet[2990]: E1106 00:42:11.317881 2990 manager.go:513] "Failed to read data from checkpoint" 
err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 00:42:11.320054 kubelet[2990]: I1106 00:42:11.318340 2990 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:42:11.320054 kubelet[2990]: I1106 00:42:11.318349 2990 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:42:11.320054 kubelet[2990]: I1106 00:42:11.318964 2990 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:42:11.320436 kubelet[2990]: E1106 00:42:11.320422 2990 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 6 00:42:11.395798 kubelet[2990]: I1106 00:42:11.395766 2990 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 00:42:11.396002 kubelet[2990]: I1106 00:42:11.395788 2990 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 00:42:11.396047 kubelet[2990]: I1106 00:42:11.395854 2990 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 6 00:42:11.416709 kubelet[2990]: E1106 00:42:11.416644 2990 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 6 00:42:11.423841 kubelet[2990]: I1106 00:42:11.423815 2990 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:42:11.440761 kubelet[2990]: I1106 00:42:11.440722 2990 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 6 00:42:11.440861 kubelet[2990]: I1106 00:42:11.440774 2990 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 6 00:42:11.484880 kubelet[2990]: I1106 00:42:11.484848 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:42:11.484880 kubelet[2990]: I1106 00:42:11.484890 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e57bbf031c6f27c793c5b7d754ae6750-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e57bbf031c6f27c793c5b7d754ae6750\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:42:11.485016 kubelet[2990]: I1106 00:42:11.484912 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e57bbf031c6f27c793c5b7d754ae6750-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e57bbf031c6f27c793c5b7d754ae6750\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:42:11.485016 kubelet[2990]: I1106 00:42:11.484931 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:42:11.485016 kubelet[2990]: I1106 00:42:11.484946 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:42:11.485016 kubelet[2990]: I1106 00:42:11.484958 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:42:11.485016 kubelet[2990]: I1106 00:42:11.484969 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 6 00:42:11.485130 kubelet[2990]: I1106 00:42:11.484980 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e57bbf031c6f27c793c5b7d754ae6750-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e57bbf031c6f27c793c5b7d754ae6750\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:42:11.485130 kubelet[2990]: I1106 00:42:11.484991 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:42:12.263583 kubelet[2990]: I1106 00:42:12.263552 2990 apiserver.go:52] "Watching apiserver" Nov 6 00:42:12.284887 kubelet[2990]: I1106 00:42:12.283566 2990 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 6 00:42:12.296203 kubelet[2990]: I1106 00:42:12.295968 2990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.295956072 podStartE2EDuration="1.295956072s" podCreationTimestamp="2025-11-06 00:42:11 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:42:12.294302104 +0000 UTC m=+1.087975792" watchObservedRunningTime="2025-11-06 00:42:12.295956072 +0000 UTC m=+1.089629750" Nov 6 00:42:12.303565 kubelet[2990]: I1106 00:42:12.303352 2990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.303339522 podStartE2EDuration="2.303339522s" podCreationTimestamp="2025-11-06 00:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:42:12.298785834 +0000 UTC m=+1.092459524" watchObservedRunningTime="2025-11-06 00:42:12.303339522 +0000 UTC m=+1.097013211" Nov 6 00:42:12.308358 kubelet[2990]: I1106 00:42:12.307715 2990 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 00:42:12.314796 kubelet[2990]: E1106 00:42:12.313386 2990 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 6 00:42:12.314977 kubelet[2990]: I1106 00:42:12.314953 2990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.31494393 podStartE2EDuration="1.31494393s" podCreationTimestamp="2025-11-06 00:42:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:42:12.304393149 +0000 UTC m=+1.098066826" watchObservedRunningTime="2025-11-06 00:42:12.31494393 +0000 UTC m=+1.108617613" Nov 6 00:42:17.160248 kubelet[2990]: I1106 00:42:17.160214 2990 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 6 00:42:17.161300 containerd[1685]: 
time="2025-11-06T00:42:17.160735754Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 6 00:42:17.161498 kubelet[2990]: I1106 00:42:17.160859 2990 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 00:42:17.860763 systemd[1]: Created slice kubepods-besteffort-pod503d745e_3d67_43c1_9aae_0dbcfed97905.slice - libcontainer container kubepods-besteffort-pod503d745e_3d67_43c1_9aae_0dbcfed97905.slice. Nov 6 00:42:17.929732 kubelet[2990]: I1106 00:42:17.929696 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/503d745e-3d67-43c1-9aae-0dbcfed97905-kube-proxy\") pod \"kube-proxy-b5kmg\" (UID: \"503d745e-3d67-43c1-9aae-0dbcfed97905\") " pod="kube-system/kube-proxy-b5kmg" Nov 6 00:42:17.929732 kubelet[2990]: I1106 00:42:17.929720 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/503d745e-3d67-43c1-9aae-0dbcfed97905-xtables-lock\") pod \"kube-proxy-b5kmg\" (UID: \"503d745e-3d67-43c1-9aae-0dbcfed97905\") " pod="kube-system/kube-proxy-b5kmg" Nov 6 00:42:17.929732 kubelet[2990]: I1106 00:42:17.929729 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/503d745e-3d67-43c1-9aae-0dbcfed97905-lib-modules\") pod \"kube-proxy-b5kmg\" (UID: \"503d745e-3d67-43c1-9aae-0dbcfed97905\") " pod="kube-system/kube-proxy-b5kmg" Nov 6 00:42:17.929732 kubelet[2990]: I1106 00:42:17.929739 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhn4n\" (UniqueName: \"kubernetes.io/projected/503d745e-3d67-43c1-9aae-0dbcfed97905-kube-api-access-fhn4n\") pod \"kube-proxy-b5kmg\" (UID: \"503d745e-3d67-43c1-9aae-0dbcfed97905\") " 
pod="kube-system/kube-proxy-b5kmg" Nov 6 00:42:18.141663 systemd[1]: Created slice kubepods-besteffort-pod4f774ce9_67b7_442c_b324_084c2c8d3b15.slice - libcontainer container kubepods-besteffort-pod4f774ce9_67b7_442c_b324_084c2c8d3b15.slice. Nov 6 00:42:18.186590 containerd[1685]: time="2025-11-06T00:42:18.186562934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b5kmg,Uid:503d745e-3d67-43c1-9aae-0dbcfed97905,Namespace:kube-system,Attempt:0,}" Nov 6 00:42:18.230965 kubelet[2990]: I1106 00:42:18.230911 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2svsd\" (UniqueName: \"kubernetes.io/projected/4f774ce9-67b7-442c-b324-084c2c8d3b15-kube-api-access-2svsd\") pod \"tigera-operator-65cdcdfd6d-49w5q\" (UID: \"4f774ce9-67b7-442c-b324-084c2c8d3b15\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-49w5q" Nov 6 00:42:18.233746 kubelet[2990]: I1106 00:42:18.230949 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4f774ce9-67b7-442c-b324-084c2c8d3b15-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-49w5q\" (UID: \"4f774ce9-67b7-442c-b324-084c2c8d3b15\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-49w5q" Nov 6 00:42:18.258491 containerd[1685]: time="2025-11-06T00:42:18.258155144Z" level=info msg="connecting to shim 0157b460b5c00e8c77bd025425c9903cf4aacf5b68e2e644ce9761711d624dc5" address="unix:///run/containerd/s/3287ca54e3fc7f737822d49631acda9ccd46ceb88117f1d06fa7ec3c1e8c9ad4" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:42:18.278103 systemd[1]: Started cri-containerd-0157b460b5c00e8c77bd025425c9903cf4aacf5b68e2e644ce9761711d624dc5.scope - libcontainer container 0157b460b5c00e8c77bd025425c9903cf4aacf5b68e2e644ce9761711d624dc5. 
Nov 6 00:42:18.393384 containerd[1685]: time="2025-11-06T00:42:18.392900243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b5kmg,Uid:503d745e-3d67-43c1-9aae-0dbcfed97905,Namespace:kube-system,Attempt:0,} returns sandbox id \"0157b460b5c00e8c77bd025425c9903cf4aacf5b68e2e644ce9761711d624dc5\"" Nov 6 00:42:18.397573 containerd[1685]: time="2025-11-06T00:42:18.397542374Z" level=info msg="CreateContainer within sandbox \"0157b460b5c00e8c77bd025425c9903cf4aacf5b68e2e644ce9761711d624dc5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 00:42:18.406539 containerd[1685]: time="2025-11-06T00:42:18.405947719Z" level=info msg="Container dd3acec4bab89029677af5aba7bf6aa492fb9a1d7163fd20d243443fc045cd3e: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:42:18.435896 containerd[1685]: time="2025-11-06T00:42:18.435859015Z" level=info msg="CreateContainer within sandbox \"0157b460b5c00e8c77bd025425c9903cf4aacf5b68e2e644ce9761711d624dc5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dd3acec4bab89029677af5aba7bf6aa492fb9a1d7163fd20d243443fc045cd3e\"" Nov 6 00:42:18.437280 containerd[1685]: time="2025-11-06T00:42:18.437236552Z" level=info msg="StartContainer for \"dd3acec4bab89029677af5aba7bf6aa492fb9a1d7163fd20d243443fc045cd3e\"" Nov 6 00:42:18.441092 containerd[1685]: time="2025-11-06T00:42:18.441070616Z" level=info msg="connecting to shim dd3acec4bab89029677af5aba7bf6aa492fb9a1d7163fd20d243443fc045cd3e" address="unix:///run/containerd/s/3287ca54e3fc7f737822d49631acda9ccd46ceb88117f1d06fa7ec3c1e8c9ad4" protocol=ttrpc version=3 Nov 6 00:42:18.453672 containerd[1685]: time="2025-11-06T00:42:18.453606382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-49w5q,Uid:4f774ce9-67b7-442c-b324-084c2c8d3b15,Namespace:tigera-operator,Attempt:0,}" Nov 6 00:42:18.462091 systemd[1]: Started cri-containerd-dd3acec4bab89029677af5aba7bf6aa492fb9a1d7163fd20d243443fc045cd3e.scope - 
libcontainer container dd3acec4bab89029677af5aba7bf6aa492fb9a1d7163fd20d243443fc045cd3e. Nov 6 00:42:18.493783 containerd[1685]: time="2025-11-06T00:42:18.493761427Z" level=info msg="StartContainer for \"dd3acec4bab89029677af5aba7bf6aa492fb9a1d7163fd20d243443fc045cd3e\" returns successfully" Nov 6 00:42:18.505745 containerd[1685]: time="2025-11-06T00:42:18.505713293Z" level=info msg="connecting to shim 5ad74114ec6e15b853ffd2fbc803c6b8e2d5f2530156822e39d1d94e80f98e83" address="unix:///run/containerd/s/069f4ed2e40e706deb8f866af49f1f1c73f565091fc4863098130d5684f6c92b" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:42:18.526027 systemd[1]: Started cri-containerd-5ad74114ec6e15b853ffd2fbc803c6b8e2d5f2530156822e39d1d94e80f98e83.scope - libcontainer container 5ad74114ec6e15b853ffd2fbc803c6b8e2d5f2530156822e39d1d94e80f98e83. Nov 6 00:42:18.567437 containerd[1685]: time="2025-11-06T00:42:18.567410213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-49w5q,Uid:4f774ce9-67b7-442c-b324-084c2c8d3b15,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5ad74114ec6e15b853ffd2fbc803c6b8e2d5f2530156822e39d1d94e80f98e83\"" Nov 6 00:42:18.568565 containerd[1685]: time="2025-11-06T00:42:18.568542781Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 6 00:42:19.043166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2207437932.mount: Deactivated successfully. 
Nov 6 00:42:19.451621 kubelet[2990]: I1106 00:42:19.451441 2990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b5kmg" podStartSLOduration=2.45143004 podStartE2EDuration="2.45143004s" podCreationTimestamp="2025-11-06 00:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:42:19.365060325 +0000 UTC m=+8.158734011" watchObservedRunningTime="2025-11-06 00:42:19.45143004 +0000 UTC m=+8.245103729" Nov 6 00:42:20.292009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1369362649.mount: Deactivated successfully. Nov 6 00:42:20.896411 containerd[1685]: time="2025-11-06T00:42:20.896373097Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:42:20.906045 containerd[1685]: time="2025-11-06T00:42:20.906019252Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 6 00:42:20.917712 containerd[1685]: time="2025-11-06T00:42:20.917643834Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:42:20.930189 containerd[1685]: time="2025-11-06T00:42:20.930130818Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:42:20.930901 containerd[1685]: time="2025-11-06T00:42:20.930680782Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size 
\"25057686\" in 2.36211283s" Nov 6 00:42:20.930901 containerd[1685]: time="2025-11-06T00:42:20.930812580Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 6 00:42:20.938995 containerd[1685]: time="2025-11-06T00:42:20.938944491Z" level=info msg="CreateContainer within sandbox \"5ad74114ec6e15b853ffd2fbc803c6b8e2d5f2530156822e39d1d94e80f98e83\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 6 00:42:21.151834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1890075726.mount: Deactivated successfully. Nov 6 00:42:21.154125 containerd[1685]: time="2025-11-06T00:42:21.153966424Z" level=info msg="Container 82879fc3371366ab02ac82139c4152dbffa000415b89cee0b4c40cba451eb2cf: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:42:21.178428 containerd[1685]: time="2025-11-06T00:42:21.178397664Z" level=info msg="CreateContainer within sandbox \"5ad74114ec6e15b853ffd2fbc803c6b8e2d5f2530156822e39d1d94e80f98e83\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"82879fc3371366ab02ac82139c4152dbffa000415b89cee0b4c40cba451eb2cf\"" Nov 6 00:42:21.179780 containerd[1685]: time="2025-11-06T00:42:21.178875862Z" level=info msg="StartContainer for \"82879fc3371366ab02ac82139c4152dbffa000415b89cee0b4c40cba451eb2cf\"" Nov 6 00:42:21.180492 containerd[1685]: time="2025-11-06T00:42:21.180462699Z" level=info msg="connecting to shim 82879fc3371366ab02ac82139c4152dbffa000415b89cee0b4c40cba451eb2cf" address="unix:///run/containerd/s/069f4ed2e40e706deb8f866af49f1f1c73f565091fc4863098130d5684f6c92b" protocol=ttrpc version=3 Nov 6 00:42:21.202066 systemd[1]: Started cri-containerd-82879fc3371366ab02ac82139c4152dbffa000415b89cee0b4c40cba451eb2cf.scope - libcontainer container 82879fc3371366ab02ac82139c4152dbffa000415b89cee0b4c40cba451eb2cf. 
Nov 6 00:42:21.242294 containerd[1685]: time="2025-11-06T00:42:21.242268856Z" level=info msg="StartContainer for \"82879fc3371366ab02ac82139c4152dbffa000415b89cee0b4c40cba451eb2cf\" returns successfully" Nov 6 00:42:21.487016 kubelet[2990]: I1106 00:42:21.486689 2990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-49w5q" podStartSLOduration=1.123382036 podStartE2EDuration="3.486674596s" podCreationTimestamp="2025-11-06 00:42:18 +0000 UTC" firstStartedPulling="2025-11-06 00:42:18.568287348 +0000 UTC m=+7.361961025" lastFinishedPulling="2025-11-06 00:42:20.931579903 +0000 UTC m=+9.725253585" observedRunningTime="2025-11-06 00:42:21.363219297 +0000 UTC m=+10.156892987" watchObservedRunningTime="2025-11-06 00:42:21.486674596 +0000 UTC m=+10.280348285" Nov 6 00:42:26.876927 sudo[2007]: pam_unix(sudo:session): session closed for user root Nov 6 00:42:26.878200 sshd[2006]: Connection closed by 139.178.89.65 port 38902 Nov 6 00:42:26.883776 sshd-session[2003]: pam_unix(sshd:session): session closed for user core Nov 6 00:42:26.886687 systemd[1]: sshd@6-139.178.70.101:22-139.178.89.65:38902.service: Deactivated successfully. Nov 6 00:42:26.890805 systemd[1]: session-9.scope: Deactivated successfully. Nov 6 00:42:26.891695 systemd[1]: session-9.scope: Consumed 4.370s CPU time, 154.8M memory peak. Nov 6 00:42:26.895243 systemd-logind[1659]: Session 9 logged out. Waiting for processes to exit. Nov 6 00:42:26.899713 systemd-logind[1659]: Removed session 9. Nov 6 00:42:30.938590 systemd[1]: Created slice kubepods-besteffort-pod1d735953_914d_4abd_a78d_d122f45108a2.slice - libcontainer container kubepods-besteffort-pod1d735953_914d_4abd_a78d_d122f45108a2.slice. 
Nov 6 00:42:31.013968 kubelet[2990]: I1106 00:42:31.013917 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc9sk\" (UniqueName: \"kubernetes.io/projected/1d735953-914d-4abd-a78d-d122f45108a2-kube-api-access-xc9sk\") pod \"calico-typha-588f658f97-xqfpp\" (UID: \"1d735953-914d-4abd-a78d-d122f45108a2\") " pod="calico-system/calico-typha-588f658f97-xqfpp" Nov 6 00:42:31.014371 kubelet[2990]: I1106 00:42:31.014298 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d735953-914d-4abd-a78d-d122f45108a2-tigera-ca-bundle\") pod \"calico-typha-588f658f97-xqfpp\" (UID: \"1d735953-914d-4abd-a78d-d122f45108a2\") " pod="calico-system/calico-typha-588f658f97-xqfpp" Nov 6 00:42:31.014371 kubelet[2990]: I1106 00:42:31.014321 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1d735953-914d-4abd-a78d-d122f45108a2-typha-certs\") pod \"calico-typha-588f658f97-xqfpp\" (UID: \"1d735953-914d-4abd-a78d-d122f45108a2\") " pod="calico-system/calico-typha-588f658f97-xqfpp" Nov 6 00:42:31.127247 systemd[1]: Created slice kubepods-besteffort-pod426df2a7_3028_4a3e_9573_ac9371ce23aa.slice - libcontainer container kubepods-besteffort-pod426df2a7_3028_4a3e_9573_ac9371ce23aa.slice. 
Nov 6 00:42:31.215375 kubelet[2990]: I1106 00:42:31.215305 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/426df2a7-3028-4a3e-9573-ac9371ce23aa-cni-log-dir\") pod \"calico-node-9pgv8\" (UID: \"426df2a7-3028-4a3e-9573-ac9371ce23aa\") " pod="calico-system/calico-node-9pgv8" Nov 6 00:42:31.215375 kubelet[2990]: I1106 00:42:31.215334 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/426df2a7-3028-4a3e-9573-ac9371ce23aa-node-certs\") pod \"calico-node-9pgv8\" (UID: \"426df2a7-3028-4a3e-9573-ac9371ce23aa\") " pod="calico-system/calico-node-9pgv8" Nov 6 00:42:31.215375 kubelet[2990]: I1106 00:42:31.215351 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/426df2a7-3028-4a3e-9573-ac9371ce23aa-cni-net-dir\") pod \"calico-node-9pgv8\" (UID: \"426df2a7-3028-4a3e-9573-ac9371ce23aa\") " pod="calico-system/calico-node-9pgv8" Nov 6 00:42:31.215375 kubelet[2990]: I1106 00:42:31.215361 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/426df2a7-3028-4a3e-9573-ac9371ce23aa-lib-modules\") pod \"calico-node-9pgv8\" (UID: \"426df2a7-3028-4a3e-9573-ac9371ce23aa\") " pod="calico-system/calico-node-9pgv8" Nov 6 00:42:31.215630 kubelet[2990]: I1106 00:42:31.215613 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/426df2a7-3028-4a3e-9573-ac9371ce23aa-xtables-lock\") pod \"calico-node-9pgv8\" (UID: \"426df2a7-3028-4a3e-9573-ac9371ce23aa\") " pod="calico-system/calico-node-9pgv8" Nov 6 00:42:31.215660 kubelet[2990]: I1106 00:42:31.215636 2990 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/426df2a7-3028-4a3e-9573-ac9371ce23aa-cni-bin-dir\") pod \"calico-node-9pgv8\" (UID: \"426df2a7-3028-4a3e-9573-ac9371ce23aa\") " pod="calico-system/calico-node-9pgv8" Nov 6 00:42:31.215679 kubelet[2990]: I1106 00:42:31.215675 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/426df2a7-3028-4a3e-9573-ac9371ce23aa-tigera-ca-bundle\") pod \"calico-node-9pgv8\" (UID: \"426df2a7-3028-4a3e-9573-ac9371ce23aa\") " pod="calico-system/calico-node-9pgv8" Nov 6 00:42:31.215696 kubelet[2990]: I1106 00:42:31.215688 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/426df2a7-3028-4a3e-9573-ac9371ce23aa-var-lib-calico\") pod \"calico-node-9pgv8\" (UID: \"426df2a7-3028-4a3e-9573-ac9371ce23aa\") " pod="calico-system/calico-node-9pgv8" Nov 6 00:42:31.215715 kubelet[2990]: I1106 00:42:31.215700 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/426df2a7-3028-4a3e-9573-ac9371ce23aa-flexvol-driver-host\") pod \"calico-node-9pgv8\" (UID: \"426df2a7-3028-4a3e-9573-ac9371ce23aa\") " pod="calico-system/calico-node-9pgv8" Nov 6 00:42:31.215715 kubelet[2990]: I1106 00:42:31.215708 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/426df2a7-3028-4a3e-9573-ac9371ce23aa-policysync\") pod \"calico-node-9pgv8\" (UID: \"426df2a7-3028-4a3e-9573-ac9371ce23aa\") " pod="calico-system/calico-node-9pgv8" Nov 6 00:42:31.215766 kubelet[2990]: I1106 00:42:31.215717 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/426df2a7-3028-4a3e-9573-ac9371ce23aa-var-run-calico\") pod \"calico-node-9pgv8\" (UID: \"426df2a7-3028-4a3e-9573-ac9371ce23aa\") " pod="calico-system/calico-node-9pgv8" Nov 6 00:42:31.215788 kubelet[2990]: I1106 00:42:31.215772 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6fqt\" (UniqueName: \"kubernetes.io/projected/426df2a7-3028-4a3e-9573-ac9371ce23aa-kube-api-access-g6fqt\") pod \"calico-node-9pgv8\" (UID: \"426df2a7-3028-4a3e-9573-ac9371ce23aa\") " pod="calico-system/calico-node-9pgv8" Nov 6 00:42:31.263055 containerd[1685]: time="2025-11-06T00:42:31.262943673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-588f658f97-xqfpp,Uid:1d735953-914d-4abd-a78d-d122f45108a2,Namespace:calico-system,Attempt:0,}" Nov 6 00:42:31.319094 kubelet[2990]: E1106 00:42:31.319066 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.319094 kubelet[2990]: W1106 00:42:31.319086 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.319094 kubelet[2990]: E1106 00:42:31.319100 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.319260 kubelet[2990]: E1106 00:42:31.319194 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.319260 kubelet[2990]: W1106 00:42:31.319199 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.319260 kubelet[2990]: E1106 00:42:31.319206 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.319335 kubelet[2990]: E1106 00:42:31.319292 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.319335 kubelet[2990]: W1106 00:42:31.319297 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.319335 kubelet[2990]: E1106 00:42:31.319303 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.319577 kubelet[2990]: E1106 00:42:31.319407 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.319577 kubelet[2990]: W1106 00:42:31.319414 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.319577 kubelet[2990]: E1106 00:42:31.319420 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.319577 kubelet[2990]: E1106 00:42:31.319514 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.319577 kubelet[2990]: W1106 00:42:31.319520 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.319577 kubelet[2990]: E1106 00:42:31.319525 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.319893 kubelet[2990]: E1106 00:42:31.319740 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.319893 kubelet[2990]: W1106 00:42:31.319750 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.319893 kubelet[2990]: E1106 00:42:31.319759 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.319893 kubelet[2990]: E1106 00:42:31.319852 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.319893 kubelet[2990]: W1106 00:42:31.319858 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.319893 kubelet[2990]: E1106 00:42:31.319875 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.320949 kubelet[2990]: E1106 00:42:31.320931 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.320949 kubelet[2990]: W1106 00:42:31.320940 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.320949 kubelet[2990]: E1106 00:42:31.320949 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.321129 kubelet[2990]: E1106 00:42:31.321092 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.321129 kubelet[2990]: W1106 00:42:31.321101 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.321129 kubelet[2990]: E1106 00:42:31.321107 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.327004 kubelet[2990]: E1106 00:42:31.321219 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.327004 kubelet[2990]: W1106 00:42:31.321225 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.327004 kubelet[2990]: E1106 00:42:31.321231 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.327004 kubelet[2990]: E1106 00:42:31.321403 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.327004 kubelet[2990]: W1106 00:42:31.321410 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.327004 kubelet[2990]: E1106 00:42:31.321418 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.327004 kubelet[2990]: E1106 00:42:31.321530 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.327004 kubelet[2990]: W1106 00:42:31.321536 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.327004 kubelet[2990]: E1106 00:42:31.321542 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.327004 kubelet[2990]: E1106 00:42:31.321841 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.331443 kubelet[2990]: W1106 00:42:31.321847 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.331443 kubelet[2990]: E1106 00:42:31.321854 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.331443 kubelet[2990]: E1106 00:42:31.321970 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.331443 kubelet[2990]: W1106 00:42:31.321978 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.331443 kubelet[2990]: E1106 00:42:31.321987 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.331443 kubelet[2990]: E1106 00:42:31.323200 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.331443 kubelet[2990]: W1106 00:42:31.323207 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.331443 kubelet[2990]: E1106 00:42:31.323215 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.331443 kubelet[2990]: E1106 00:42:31.323317 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.331443 kubelet[2990]: W1106 00:42:31.323324 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.331650 kubelet[2990]: E1106 00:42:31.323334 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.331650 kubelet[2990]: E1106 00:42:31.323780 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.331650 kubelet[2990]: W1106 00:42:31.323787 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.331650 kubelet[2990]: E1106 00:42:31.323794 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.331650 kubelet[2990]: E1106 00:42:31.323934 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.331650 kubelet[2990]: W1106 00:42:31.323940 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.331650 kubelet[2990]: E1106 00:42:31.323946 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.331650 kubelet[2990]: E1106 00:42:31.324126 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.331650 kubelet[2990]: W1106 00:42:31.324134 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.331650 kubelet[2990]: E1106 00:42:31.324143 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.331820 kubelet[2990]: E1106 00:42:31.325371 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.331820 kubelet[2990]: W1106 00:42:31.325378 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.331820 kubelet[2990]: E1106 00:42:31.325386 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.331820 kubelet[2990]: E1106 00:42:31.325503 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.331820 kubelet[2990]: W1106 00:42:31.325508 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.331820 kubelet[2990]: E1106 00:42:31.325514 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.331820 kubelet[2990]: E1106 00:42:31.325747 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.331820 kubelet[2990]: W1106 00:42:31.325753 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.331820 kubelet[2990]: E1106 00:42:31.325759 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.331820 kubelet[2990]: E1106 00:42:31.328263 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.332475 kubelet[2990]: W1106 00:42:31.328279 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.332475 kubelet[2990]: E1106 00:42:31.328290 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.332475 kubelet[2990]: E1106 00:42:31.328494 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.332475 kubelet[2990]: W1106 00:42:31.328500 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.332475 kubelet[2990]: E1106 00:42:31.328556 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.332475 kubelet[2990]: E1106 00:42:31.329094 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.332475 kubelet[2990]: W1106 00:42:31.329100 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.332475 kubelet[2990]: E1106 00:42:31.329107 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.349655 kubelet[2990]: E1106 00:42:31.349534 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.350870 kubelet[2990]: W1106 00:42:31.350337 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.350870 kubelet[2990]: E1106 00:42:31.350361 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.354894 kubelet[2990]: E1106 00:42:31.353687 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ct65m" podUID="d1996df9-2b05-483b-a46f-e6437e23c06c" Nov 6 00:42:31.358523 containerd[1685]: time="2025-11-06T00:42:31.358356213Z" level=info msg="connecting to shim 81163bc332936a299aee1f257a2379a50248f2c7ad4ecf1d66ba26a9d422f9a8" address="unix:///run/containerd/s/f4d849983a3acf47246ab310b80773f2c68009e3818cb814fbc5620c3f5a534d" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:42:31.385007 systemd[1]: Started cri-containerd-81163bc332936a299aee1f257a2379a50248f2c7ad4ecf1d66ba26a9d422f9a8.scope - libcontainer container 81163bc332936a299aee1f257a2379a50248f2c7ad4ecf1d66ba26a9d422f9a8. 
Nov 6 00:42:31.402541 kubelet[2990]: E1106 00:42:31.402509 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.402541 kubelet[2990]: W1106 00:42:31.402536 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.402654 kubelet[2990]: E1106 00:42:31.402553 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.402679 kubelet[2990]: E1106 00:42:31.402657 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.402679 kubelet[2990]: W1106 00:42:31.402662 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.402679 kubelet[2990]: E1106 00:42:31.402667 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.402759 kubelet[2990]: E1106 00:42:31.402744 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.402759 kubelet[2990]: W1106 00:42:31.402753 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.402759 kubelet[2990]: E1106 00:42:31.402759 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.402900 kubelet[2990]: E1106 00:42:31.402887 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.402900 kubelet[2990]: W1106 00:42:31.402896 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.402990 kubelet[2990]: E1106 00:42:31.402903 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.403030 kubelet[2990]: E1106 00:42:31.403002 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.403030 kubelet[2990]: W1106 00:42:31.403006 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.403030 kubelet[2990]: E1106 00:42:31.403011 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.403130 kubelet[2990]: E1106 00:42:31.403107 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.403130 kubelet[2990]: W1106 00:42:31.403114 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.403130 kubelet[2990]: E1106 00:42:31.403119 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.403231 kubelet[2990]: E1106 00:42:31.403218 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.403231 kubelet[2990]: W1106 00:42:31.403226 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.403332 kubelet[2990]: E1106 00:42:31.403233 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.403359 kubelet[2990]: E1106 00:42:31.403347 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.403359 kubelet[2990]: W1106 00:42:31.403352 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.403359 kubelet[2990]: E1106 00:42:31.403358 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.403550 kubelet[2990]: E1106 00:42:31.403537 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.403550 kubelet[2990]: W1106 00:42:31.403547 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.403600 kubelet[2990]: E1106 00:42:31.403552 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.403655 kubelet[2990]: E1106 00:42:31.403644 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.403655 kubelet[2990]: W1106 00:42:31.403652 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.403710 kubelet[2990]: E1106 00:42:31.403660 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.403753 kubelet[2990]: E1106 00:42:31.403741 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.403753 kubelet[2990]: W1106 00:42:31.403748 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.403753 kubelet[2990]: E1106 00:42:31.403753 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.403861 kubelet[2990]: E1106 00:42:31.403842 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.403861 kubelet[2990]: W1106 00:42:31.403848 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.403861 kubelet[2990]: E1106 00:42:31.403854 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.404021 kubelet[2990]: E1106 00:42:31.404012 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.404021 kubelet[2990]: W1106 00:42:31.404020 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.404070 kubelet[2990]: E1106 00:42:31.404025 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.404152 kubelet[2990]: E1106 00:42:31.404140 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.404152 kubelet[2990]: W1106 00:42:31.404150 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.404200 kubelet[2990]: E1106 00:42:31.404155 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.404251 kubelet[2990]: E1106 00:42:31.404239 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.404251 kubelet[2990]: W1106 00:42:31.404249 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.404334 kubelet[2990]: E1106 00:42:31.404254 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.404367 kubelet[2990]: E1106 00:42:31.404344 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.404367 kubelet[2990]: W1106 00:42:31.404348 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.404367 kubelet[2990]: E1106 00:42:31.404353 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.404467 kubelet[2990]: E1106 00:42:31.404456 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.404467 kubelet[2990]: W1106 00:42:31.404464 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.404544 kubelet[2990]: E1106 00:42:31.404469 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.404729 kubelet[2990]: E1106 00:42:31.404717 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.404729 kubelet[2990]: W1106 00:42:31.404727 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.404780 kubelet[2990]: E1106 00:42:31.404734 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.404841 kubelet[2990]: E1106 00:42:31.404829 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.404841 kubelet[2990]: W1106 00:42:31.404838 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.404946 kubelet[2990]: E1106 00:42:31.404843 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.404946 kubelet[2990]: E1106 00:42:31.404938 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.404946 kubelet[2990]: W1106 00:42:31.404943 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.405023 kubelet[2990]: E1106 00:42:31.404947 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.417255 kubelet[2990]: E1106 00:42:31.417229 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.417255 kubelet[2990]: W1106 00:42:31.417247 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.417255 kubelet[2990]: E1106 00:42:31.417261 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.418027 kubelet[2990]: I1106 00:42:31.417281 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1996df9-2b05-483b-a46f-e6437e23c06c-kubelet-dir\") pod \"csi-node-driver-ct65m\" (UID: \"d1996df9-2b05-483b-a46f-e6437e23c06c\") " pod="calico-system/csi-node-driver-ct65m" Nov 6 00:42:31.418027 kubelet[2990]: E1106 00:42:31.417396 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.418027 kubelet[2990]: W1106 00:42:31.417402 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.418027 kubelet[2990]: E1106 00:42:31.417407 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.418027 kubelet[2990]: I1106 00:42:31.417418 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d1996df9-2b05-483b-a46f-e6437e23c06c-registration-dir\") pod \"csi-node-driver-ct65m\" (UID: \"d1996df9-2b05-483b-a46f-e6437e23c06c\") " pod="calico-system/csi-node-driver-ct65m" Nov 6 00:42:31.418027 kubelet[2990]: E1106 00:42:31.417493 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.418027 kubelet[2990]: W1106 00:42:31.417498 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.418027 kubelet[2990]: E1106 00:42:31.417505 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.418253 kubelet[2990]: I1106 00:42:31.417514 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d1996df9-2b05-483b-a46f-e6437e23c06c-socket-dir\") pod \"csi-node-driver-ct65m\" (UID: \"d1996df9-2b05-483b-a46f-e6437e23c06c\") " pod="calico-system/csi-node-driver-ct65m" Nov 6 00:42:31.419489 kubelet[2990]: E1106 00:42:31.418638 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.419489 kubelet[2990]: W1106 00:42:31.418649 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.419489 kubelet[2990]: E1106 00:42:31.418661 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.419489 kubelet[2990]: E1106 00:42:31.418982 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.419489 kubelet[2990]: W1106 00:42:31.418990 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.419489 kubelet[2990]: E1106 00:42:31.418998 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.419489 kubelet[2990]: E1106 00:42:31.419171 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.419489 kubelet[2990]: W1106 00:42:31.419179 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.419489 kubelet[2990]: E1106 00:42:31.419185 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.419489 kubelet[2990]: E1106 00:42:31.419272 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.419879 kubelet[2990]: W1106 00:42:31.419277 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.419879 kubelet[2990]: E1106 00:42:31.419283 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.419879 kubelet[2990]: E1106 00:42:31.419529 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.419879 kubelet[2990]: W1106 00:42:31.419536 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.419879 kubelet[2990]: E1106 00:42:31.419541 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.419879 kubelet[2990]: I1106 00:42:31.419556 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjx5p\" (UniqueName: \"kubernetes.io/projected/d1996df9-2b05-483b-a46f-e6437e23c06c-kube-api-access-mjx5p\") pod \"csi-node-driver-ct65m\" (UID: \"d1996df9-2b05-483b-a46f-e6437e23c06c\") " pod="calico-system/csi-node-driver-ct65m" Nov 6 00:42:31.420819 kubelet[2990]: E1106 00:42:31.420327 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.420819 kubelet[2990]: W1106 00:42:31.420337 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.420819 kubelet[2990]: E1106 00:42:31.420347 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.420819 kubelet[2990]: E1106 00:42:31.420434 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.420819 kubelet[2990]: W1106 00:42:31.420440 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.420819 kubelet[2990]: E1106 00:42:31.420445 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.420819 kubelet[2990]: E1106 00:42:31.420521 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.420819 kubelet[2990]: W1106 00:42:31.420526 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.420819 kubelet[2990]: E1106 00:42:31.420531 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.420819 kubelet[2990]: E1106 00:42:31.420621 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.421437 kubelet[2990]: W1106 00:42:31.420627 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.421437 kubelet[2990]: E1106 00:42:31.420632 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.421437 kubelet[2990]: E1106 00:42:31.420736 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.421437 kubelet[2990]: W1106 00:42:31.420741 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.421437 kubelet[2990]: E1106 00:42:31.420745 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.421437 kubelet[2990]: I1106 00:42:31.420765 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d1996df9-2b05-483b-a46f-e6437e23c06c-varrun\") pod \"csi-node-driver-ct65m\" (UID: \"d1996df9-2b05-483b-a46f-e6437e23c06c\") " pod="calico-system/csi-node-driver-ct65m" Nov 6 00:42:31.421437 kubelet[2990]: E1106 00:42:31.420963 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.421437 kubelet[2990]: W1106 00:42:31.420969 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.421437 kubelet[2990]: E1106 00:42:31.420974 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.423454 kubelet[2990]: E1106 00:42:31.421071 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.423454 kubelet[2990]: W1106 00:42:31.421076 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.423454 kubelet[2990]: E1106 00:42:31.421082 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.439404 containerd[1685]: time="2025-11-06T00:42:31.438652897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-588f658f97-xqfpp,Uid:1d735953-914d-4abd-a78d-d122f45108a2,Namespace:calico-system,Attempt:0,} returns sandbox id \"81163bc332936a299aee1f257a2379a50248f2c7ad4ecf1d66ba26a9d422f9a8\"" Nov 6 00:42:31.440185 containerd[1685]: time="2025-11-06T00:42:31.440154421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 6 00:42:31.444235 containerd[1685]: time="2025-11-06T00:42:31.444213096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9pgv8,Uid:426df2a7-3028-4a3e-9573-ac9371ce23aa,Namespace:calico-system,Attempt:0,}" Nov 6 00:42:31.524968 kubelet[2990]: E1106 00:42:31.524939 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.525170 kubelet[2990]: W1106 00:42:31.525093 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.525170 kubelet[2990]: E1106 00:42:31.525112 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:31.542716 kubelet[2990]: E1106 00:42:31.542699 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:31.542716 kubelet[2990]: W1106 00:42:31.542711 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:31.542716 kubelet[2990]: E1106 00:42:31.542723 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:31.560270 containerd[1685]: time="2025-11-06T00:42:31.559947550Z" level=info msg="connecting to shim ee6c73feb89cc92e9029178cca5c9eeb4782b2b9d55c9c4f429b0b0122243aeb" address="unix:///run/containerd/s/c82e21b116edfd196c55a860050824059a5a574714573706ac3c2418abfb3880" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:42:31.580954 systemd[1]: Started cri-containerd-ee6c73feb89cc92e9029178cca5c9eeb4782b2b9d55c9c4f429b0b0122243aeb.scope - libcontainer container ee6c73feb89cc92e9029178cca5c9eeb4782b2b9d55c9c4f429b0b0122243aeb. Nov 6 00:42:31.603420 containerd[1685]: time="2025-11-06T00:42:31.603387525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9pgv8,Uid:426df2a7-3028-4a3e-9573-ac9371ce23aa,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee6c73feb89cc92e9029178cca5c9eeb4782b2b9d55c9c4f429b0b0122243aeb\"" Nov 6 00:42:33.088897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3580362541.mount: Deactivated successfully. 
Nov 6 00:42:33.289313 kubelet[2990]: E1106 00:42:33.289291 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ct65m" podUID="d1996df9-2b05-483b-a46f-e6437e23c06c" Nov 6 00:42:34.345536 containerd[1685]: time="2025-11-06T00:42:34.345495835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:42:34.346321 containerd[1685]: time="2025-11-06T00:42:34.346307260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 6 00:42:34.346689 containerd[1685]: time="2025-11-06T00:42:34.346644148Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:42:34.347884 containerd[1685]: time="2025-11-06T00:42:34.347843422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:42:34.348211 containerd[1685]: time="2025-11-06T00:42:34.348193186Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.908019466s" Nov 6 00:42:34.348243 containerd[1685]: time="2025-11-06T00:42:34.348210732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 6 00:42:34.349449 containerd[1685]: time="2025-11-06T00:42:34.349315582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 6 00:42:34.358061 containerd[1685]: time="2025-11-06T00:42:34.358036099Z" level=info msg="CreateContainer within sandbox \"81163bc332936a299aee1f257a2379a50248f2c7ad4ecf1d66ba26a9d422f9a8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 6 00:42:34.362733 containerd[1685]: time="2025-11-06T00:42:34.362712825Z" level=info msg="Container 2e96dbbb7a5755beed2c6ca02bc2ce7aedfe4f6d05f104cf050bad961c29ced8: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:42:34.365524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount758067053.mount: Deactivated successfully. Nov 6 00:42:34.369137 containerd[1685]: time="2025-11-06T00:42:34.369112999Z" level=info msg="CreateContainer within sandbox \"81163bc332936a299aee1f257a2379a50248f2c7ad4ecf1d66ba26a9d422f9a8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2e96dbbb7a5755beed2c6ca02bc2ce7aedfe4f6d05f104cf050bad961c29ced8\"" Nov 6 00:42:34.369720 containerd[1685]: time="2025-11-06T00:42:34.369645858Z" level=info msg="StartContainer for \"2e96dbbb7a5755beed2c6ca02bc2ce7aedfe4f6d05f104cf050bad961c29ced8\"" Nov 6 00:42:34.371121 containerd[1685]: time="2025-11-06T00:42:34.370710640Z" level=info msg="connecting to shim 2e96dbbb7a5755beed2c6ca02bc2ce7aedfe4f6d05f104cf050bad961c29ced8" address="unix:///run/containerd/s/f4d849983a3acf47246ab310b80773f2c68009e3818cb814fbc5620c3f5a534d" protocol=ttrpc version=3 Nov 6 00:42:34.392978 systemd[1]: Started cri-containerd-2e96dbbb7a5755beed2c6ca02bc2ce7aedfe4f6d05f104cf050bad961c29ced8.scope - libcontainer container 2e96dbbb7a5755beed2c6ca02bc2ce7aedfe4f6d05f104cf050bad961c29ced8. 
Nov 6 00:42:34.430454 containerd[1685]: time="2025-11-06T00:42:34.430424224Z" level=info msg="StartContainer for \"2e96dbbb7a5755beed2c6ca02bc2ce7aedfe4f6d05f104cf050bad961c29ced8\" returns successfully" Nov 6 00:42:35.292847 kubelet[2990]: E1106 00:42:35.292813 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ct65m" podUID="d1996df9-2b05-483b-a46f-e6437e23c06c" Nov 6 00:42:35.382673 kubelet[2990]: I1106 00:42:35.381295 2990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-588f658f97-xqfpp" podStartSLOduration=2.472156052 podStartE2EDuration="5.381282345s" podCreationTimestamp="2025-11-06 00:42:30 +0000 UTC" firstStartedPulling="2025-11-06 00:42:31.439917827 +0000 UTC m=+20.233591504" lastFinishedPulling="2025-11-06 00:42:34.349044119 +0000 UTC m=+23.142717797" observedRunningTime="2025-11-06 00:42:35.380578213 +0000 UTC m=+24.174251902" watchObservedRunningTime="2025-11-06 00:42:35.381282345 +0000 UTC m=+24.174956034" Nov 6 00:42:35.431832 kubelet[2990]: E1106 00:42:35.431805 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:35.431832 kubelet[2990]: W1106 00:42:35.431825 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:35.432337 kubelet[2990]: E1106 00:42:35.432314 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:35.434089 kubelet[2990]: E1106 00:42:35.434078 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:35.434089 kubelet[2990]: W1106 00:42:35.434087 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:35.434128 kubelet[2990]: E1106 00:42:35.434093 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:35.434187 kubelet[2990]: E1106 00:42:35.434173 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:35.434187 kubelet[2990]: W1106 00:42:35.434181 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:35.434187 kubelet[2990]: E1106 00:42:35.434186 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:35.460209 kubelet[2990]: E1106 00:42:35.460136 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:35.460209 kubelet[2990]: W1106 00:42:35.460150 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:35.460209 kubelet[2990]: E1106 00:42:35.460163 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:35.460339 kubelet[2990]: E1106 00:42:35.460266 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:35.460339 kubelet[2990]: W1106 00:42:35.460271 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:35.460339 kubelet[2990]: E1106 00:42:35.460276 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:35.460430 kubelet[2990]: E1106 00:42:35.460360 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:35.460430 kubelet[2990]: W1106 00:42:35.460364 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:35.460430 kubelet[2990]: E1106 00:42:35.460370 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:35.460495 kubelet[2990]: E1106 00:42:35.460487 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:35.460495 kubelet[2990]: W1106 00:42:35.460493 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:35.460536 kubelet[2990]: E1106 00:42:35.460503 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:35.461478 kubelet[2990]: E1106 00:42:35.460586 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:35.461478 kubelet[2990]: W1106 00:42:35.460593 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:35.461478 kubelet[2990]: E1106 00:42:35.460597 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:35.461478 kubelet[2990]: E1106 00:42:35.460660 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:35.461478 kubelet[2990]: W1106 00:42:35.460664 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:35.461478 kubelet[2990]: E1106 00:42:35.460669 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:35.461478 kubelet[2990]: E1106 00:42:35.460763 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:35.461478 kubelet[2990]: W1106 00:42:35.460767 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:35.461478 kubelet[2990]: E1106 00:42:35.460772 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:35.461478 kubelet[2990]: E1106 00:42:35.461035 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:35.462334 kubelet[2990]: W1106 00:42:35.461043 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:35.462334 kubelet[2990]: E1106 00:42:35.461050 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:35.462334 kubelet[2990]: E1106 00:42:35.461232 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:35.462334 kubelet[2990]: W1106 00:42:35.461237 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:35.462334 kubelet[2990]: E1106 00:42:35.461242 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:35.462334 kubelet[2990]: E1106 00:42:35.461895 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:35.462334 kubelet[2990]: W1106 00:42:35.461901 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:35.462334 kubelet[2990]: E1106 00:42:35.461907 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:35.463825 kubelet[2990]: E1106 00:42:35.463454 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:35.463825 kubelet[2990]: W1106 00:42:35.463464 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:35.463825 kubelet[2990]: E1106 00:42:35.463472 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:35.463825 kubelet[2990]: E1106 00:42:35.463657 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:35.463825 kubelet[2990]: W1106 00:42:35.463662 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:35.463825 kubelet[2990]: E1106 00:42:35.463667 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:35.465422 kubelet[2990]: E1106 00:42:35.464008 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:35.465422 kubelet[2990]: W1106 00:42:35.464013 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:35.465422 kubelet[2990]: E1106 00:42:35.464020 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:35.465422 kubelet[2990]: E1106 00:42:35.464299 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:35.465422 kubelet[2990]: W1106 00:42:35.464306 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:35.465422 kubelet[2990]: E1106 00:42:35.464551 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:35.465422 kubelet[2990]: E1106 00:42:35.465163 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:35.465422 kubelet[2990]: W1106 00:42:35.465169 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:35.465422 kubelet[2990]: E1106 00:42:35.465177 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:35.465892 kubelet[2990]: E1106 00:42:35.465724 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:35.465892 kubelet[2990]: W1106 00:42:35.465731 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:35.465892 kubelet[2990]: E1106 00:42:35.465737 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:35.466356 kubelet[2990]: E1106 00:42:35.466293 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:35.466356 kubelet[2990]: W1106 00:42:35.466299 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:35.466356 kubelet[2990]: E1106 00:42:35.466305 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:42:35.466808 kubelet[2990]: E1106 00:42:35.466781 2990 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:42:35.466808 kubelet[2990]: W1106 00:42:35.466789 2990 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:42:35.466808 kubelet[2990]: E1106 00:42:35.466795 2990 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:42:35.846908 containerd[1685]: time="2025-11-06T00:42:35.846883830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:42:35.852820 containerd[1685]: time="2025-11-06T00:42:35.852792099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 6 00:42:35.855454 containerd[1685]: time="2025-11-06T00:42:35.855428593Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:42:35.864092 containerd[1685]: time="2025-11-06T00:42:35.864044566Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.514660764s" Nov 6 00:42:35.864092 containerd[1685]: time="2025-11-06T00:42:35.864080260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 6 00:42:35.864817 containerd[1685]: time="2025-11-06T00:42:35.864739533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:42:35.882161 containerd[1685]: time="2025-11-06T00:42:35.882085688Z" level=info msg="CreateContainer within sandbox \"ee6c73feb89cc92e9029178cca5c9eeb4782b2b9d55c9c4f429b0b0122243aeb\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 6 00:42:35.898855 containerd[1685]: time="2025-11-06T00:42:35.896943974Z" level=info msg="Container 710dde522e94d153c868b6c0daa419f0affe773cc06d1a76207cf5efa3d2df03: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:42:35.898157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1224493873.mount: Deactivated successfully. Nov 6 00:42:35.910061 containerd[1685]: time="2025-11-06T00:42:35.910040326Z" level=info msg="CreateContainer within sandbox \"ee6c73feb89cc92e9029178cca5c9eeb4782b2b9d55c9c4f429b0b0122243aeb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"710dde522e94d153c868b6c0daa419f0affe773cc06d1a76207cf5efa3d2df03\"" Nov 6 00:42:35.911265 containerd[1685]: time="2025-11-06T00:42:35.911252653Z" level=info msg="StartContainer for \"710dde522e94d153c868b6c0daa419f0affe773cc06d1a76207cf5efa3d2df03\"" Nov 6 00:42:35.913190 containerd[1685]: time="2025-11-06T00:42:35.913084688Z" level=info msg="connecting to shim 710dde522e94d153c868b6c0daa419f0affe773cc06d1a76207cf5efa3d2df03" address="unix:///run/containerd/s/c82e21b116edfd196c55a860050824059a5a574714573706ac3c2418abfb3880" protocol=ttrpc version=3 Nov 6 00:42:35.933025 systemd[1]: Started cri-containerd-710dde522e94d153c868b6c0daa419f0affe773cc06d1a76207cf5efa3d2df03.scope - libcontainer container 710dde522e94d153c868b6c0daa419f0affe773cc06d1a76207cf5efa3d2df03. Nov 6 00:42:35.970882 systemd[1]: cri-containerd-710dde522e94d153c868b6c0daa419f0affe773cc06d1a76207cf5efa3d2df03.scope: Deactivated successfully. 
Nov 6 00:42:35.988522 containerd[1685]: time="2025-11-06T00:42:35.988329353Z" level=info msg="TaskExit event in podsandbox handler container_id:\"710dde522e94d153c868b6c0daa419f0affe773cc06d1a76207cf5efa3d2df03\" id:\"710dde522e94d153c868b6c0daa419f0affe773cc06d1a76207cf5efa3d2df03\" pid:3685 exited_at:{seconds:1762389755 nanos:970704427}" Nov 6 00:42:35.988522 containerd[1685]: time="2025-11-06T00:42:35.988438577Z" level=info msg="StartContainer for \"710dde522e94d153c868b6c0daa419f0affe773cc06d1a76207cf5efa3d2df03\" returns successfully" Nov 6 00:42:35.991918 containerd[1685]: time="2025-11-06T00:42:35.991891919Z" level=info msg="received exit event container_id:\"710dde522e94d153c868b6c0daa419f0affe773cc06d1a76207cf5efa3d2df03\" id:\"710dde522e94d153c868b6c0daa419f0affe773cc06d1a76207cf5efa3d2df03\" pid:3685 exited_at:{seconds:1762389755 nanos:970704427}" Nov 6 00:42:36.009347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-710dde522e94d153c868b6c0daa419f0affe773cc06d1a76207cf5efa3d2df03-rootfs.mount: Deactivated successfully. 
Nov 6 00:42:36.374283 kubelet[2990]: I1106 00:42:36.373637 2990 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:42:37.288139 kubelet[2990]: E1106 00:42:37.287642 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ct65m" podUID="d1996df9-2b05-483b-a46f-e6437e23c06c" Nov 6 00:42:37.377612 containerd[1685]: time="2025-11-06T00:42:37.377592210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 6 00:42:39.291705 kubelet[2990]: E1106 00:42:39.291651 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ct65m" podUID="d1996df9-2b05-483b-a46f-e6437e23c06c" Nov 6 00:42:41.289122 kubelet[2990]: E1106 00:42:41.288876 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ct65m" podUID="d1996df9-2b05-483b-a46f-e6437e23c06c" Nov 6 00:42:41.566524 containerd[1685]: time="2025-11-06T00:42:41.566432513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:42:41.568969 containerd[1685]: time="2025-11-06T00:42:41.568908624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 6 00:42:41.571320 containerd[1685]: time="2025-11-06T00:42:41.571073321Z" level=info msg="ImageCreate event 
name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:42:41.573349 containerd[1685]: time="2025-11-06T00:42:41.573318604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:42:41.573844 containerd[1685]: time="2025-11-06T00:42:41.573750429Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.196089478s" Nov 6 00:42:41.574023 containerd[1685]: time="2025-11-06T00:42:41.573952856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 6 00:42:41.586280 containerd[1685]: time="2025-11-06T00:42:41.586248561Z" level=info msg="CreateContainer within sandbox \"ee6c73feb89cc92e9029178cca5c9eeb4782b2b9d55c9c4f429b0b0122243aeb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 6 00:42:41.592631 containerd[1685]: time="2025-11-06T00:42:41.592601728Z" level=info msg="Container 1be8ad9b8774e8eccd35c76149a06a29e8b84dfdfac3e591a71ee4ba44898589: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:42:41.597649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1001146324.mount: Deactivated successfully. 
Nov 6 00:42:41.598961 containerd[1685]: time="2025-11-06T00:42:41.598938155Z" level=info msg="CreateContainer within sandbox \"ee6c73feb89cc92e9029178cca5c9eeb4782b2b9d55c9c4f429b0b0122243aeb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1be8ad9b8774e8eccd35c76149a06a29e8b84dfdfac3e591a71ee4ba44898589\"" Nov 6 00:42:41.600197 containerd[1685]: time="2025-11-06T00:42:41.599348616Z" level=info msg="StartContainer for \"1be8ad9b8774e8eccd35c76149a06a29e8b84dfdfac3e591a71ee4ba44898589\"" Nov 6 00:42:41.600412 containerd[1685]: time="2025-11-06T00:42:41.600381752Z" level=info msg="connecting to shim 1be8ad9b8774e8eccd35c76149a06a29e8b84dfdfac3e591a71ee4ba44898589" address="unix:///run/containerd/s/c82e21b116edfd196c55a860050824059a5a574714573706ac3c2418abfb3880" protocol=ttrpc version=3 Nov 6 00:42:41.624050 systemd[1]: Started cri-containerd-1be8ad9b8774e8eccd35c76149a06a29e8b84dfdfac3e591a71ee4ba44898589.scope - libcontainer container 1be8ad9b8774e8eccd35c76149a06a29e8b84dfdfac3e591a71ee4ba44898589. Nov 6 00:42:41.654303 containerd[1685]: time="2025-11-06T00:42:41.654274415Z" level=info msg="StartContainer for \"1be8ad9b8774e8eccd35c76149a06a29e8b84dfdfac3e591a71ee4ba44898589\" returns successfully" Nov 6 00:42:42.750259 systemd[1]: cri-containerd-1be8ad9b8774e8eccd35c76149a06a29e8b84dfdfac3e591a71ee4ba44898589.scope: Deactivated successfully. Nov 6 00:42:42.750470 systemd[1]: cri-containerd-1be8ad9b8774e8eccd35c76149a06a29e8b84dfdfac3e591a71ee4ba44898589.scope: Consumed 313ms CPU time, 168M memory peak, 4M read from disk, 171.3M written to disk. 
Nov 6 00:42:42.806882 kubelet[2990]: I1106 00:42:42.806855 2990 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 6 00:42:42.820843 containerd[1685]: time="2025-11-06T00:42:42.820815872Z" level=info msg="received exit event container_id:\"1be8ad9b8774e8eccd35c76149a06a29e8b84dfdfac3e591a71ee4ba44898589\" id:\"1be8ad9b8774e8eccd35c76149a06a29e8b84dfdfac3e591a71ee4ba44898589\" pid:3743 exited_at:{seconds:1762389762 nanos:820621191}" Nov 6 00:42:42.821523 containerd[1685]: time="2025-11-06T00:42:42.821510264Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1be8ad9b8774e8eccd35c76149a06a29e8b84dfdfac3e591a71ee4ba44898589\" id:\"1be8ad9b8774e8eccd35c76149a06a29e8b84dfdfac3e591a71ee4ba44898589\" pid:3743 exited_at:{seconds:1762389762 nanos:820621191}" Nov 6 00:42:42.858289 systemd[1]: Created slice kubepods-besteffort-pod294239ca_8d86_4b82_a8b1_d110ecda217d.slice - libcontainer container kubepods-besteffort-pod294239ca_8d86_4b82_a8b1_d110ecda217d.slice. 
Nov 6 00:42:42.866788 kubelet[2990]: I1106 00:42:42.866315 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/294239ca-8d86-4b82-a8b1-d110ecda217d-whisker-ca-bundle\") pod \"whisker-66b8974474-t5gk5\" (UID: \"294239ca-8d86-4b82-a8b1-d110ecda217d\") " pod="calico-system/whisker-66b8974474-t5gk5" Nov 6 00:42:42.866788 kubelet[2990]: I1106 00:42:42.866338 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/294239ca-8d86-4b82-a8b1-d110ecda217d-whisker-backend-key-pair\") pod \"whisker-66b8974474-t5gk5\" (UID: \"294239ca-8d86-4b82-a8b1-d110ecda217d\") " pod="calico-system/whisker-66b8974474-t5gk5" Nov 6 00:42:42.866788 kubelet[2990]: I1106 00:42:42.866350 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hk9p\" (UniqueName: \"kubernetes.io/projected/294239ca-8d86-4b82-a8b1-d110ecda217d-kube-api-access-9hk9p\") pod \"whisker-66b8974474-t5gk5\" (UID: \"294239ca-8d86-4b82-a8b1-d110ecda217d\") " pod="calico-system/whisker-66b8974474-t5gk5" Nov 6 00:42:42.871948 systemd[1]: Created slice kubepods-besteffort-podc0e494e7_cf65_452e_8423_9173b08e8c13.slice - libcontainer container kubepods-besteffort-podc0e494e7_cf65_452e_8423_9173b08e8c13.slice. Nov 6 00:42:42.880305 systemd[1]: Created slice kubepods-besteffort-podd2f87c6a_006e_4994_b5d3_96321f7afa74.slice - libcontainer container kubepods-besteffort-podd2f87c6a_006e_4994_b5d3_96321f7afa74.slice. Nov 6 00:42:42.892756 systemd[1]: Created slice kubepods-besteffort-podf9e94ba9_fb6e_467c_acd4_cc6ad1a73d5e.slice - libcontainer container kubepods-besteffort-podf9e94ba9_fb6e_467c_acd4_cc6ad1a73d5e.slice. 
Nov 6 00:42:42.900082 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1be8ad9b8774e8eccd35c76149a06a29e8b84dfdfac3e591a71ee4ba44898589-rootfs.mount: Deactivated successfully. Nov 6 00:42:42.905886 systemd[1]: Created slice kubepods-burstable-podf1a660b5_fc2e_4025_a112_e4a7be1ec985.slice - libcontainer container kubepods-burstable-podf1a660b5_fc2e_4025_a112_e4a7be1ec985.slice. Nov 6 00:42:42.914676 systemd[1]: Created slice kubepods-besteffort-pod64e4744f_a9ed_4557_9e74_304e879412c8.slice - libcontainer container kubepods-besteffort-pod64e4744f_a9ed_4557_9e74_304e879412c8.slice. Nov 6 00:42:42.921775 systemd[1]: Created slice kubepods-besteffort-pod68d221fc_9d1f_4317_856d_8104af586bb8.slice - libcontainer container kubepods-besteffort-pod68d221fc_9d1f_4317_856d_8104af586bb8.slice. Nov 6 00:42:42.927298 systemd[1]: Created slice kubepods-burstable-pod1b69788f_7d65_4052_bed4_5296f39e04d2.slice - libcontainer container kubepods-burstable-pod1b69788f_7d65_4052_bed4_5296f39e04d2.slice. 
Nov 6 00:42:42.967538 kubelet[2990]: I1106 00:42:42.967512 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shd6c\" (UniqueName: \"kubernetes.io/projected/c0e494e7-cf65-452e-8423-9173b08e8c13-kube-api-access-shd6c\") pod \"calico-apiserver-5b4c854798-5ckfk\" (UID: \"c0e494e7-cf65-452e-8423-9173b08e8c13\") " pod="calico-apiserver/calico-apiserver-5b4c854798-5ckfk" Nov 6 00:42:42.967704 kubelet[2990]: I1106 00:42:42.967695 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c0e494e7-cf65-452e-8423-9173b08e8c13-calico-apiserver-certs\") pod \"calico-apiserver-5b4c854798-5ckfk\" (UID: \"c0e494e7-cf65-452e-8423-9173b08e8c13\") " pod="calico-apiserver/calico-apiserver-5b4c854798-5ckfk" Nov 6 00:42:42.967762 kubelet[2990]: I1106 00:42:42.967755 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e-calico-apiserver-certs\") pod \"calico-apiserver-5b4c854798-qdwsh\" (UID: \"f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e\") " pod="calico-apiserver/calico-apiserver-5b4c854798-qdwsh" Nov 6 00:42:42.967806 kubelet[2990]: I1106 00:42:42.967799 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d2f87c6a-006e-4994-b5d3-96321f7afa74-calico-apiserver-certs\") pod \"calico-apiserver-b6568c646-xhvbr\" (UID: \"d2f87c6a-006e-4994-b5d3-96321f7afa74\") " pod="calico-apiserver/calico-apiserver-b6568c646-xhvbr" Nov 6 00:42:42.967930 kubelet[2990]: I1106 00:42:42.967922 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/1b69788f-7d65-4052-bed4-5296f39e04d2-config-volume\") pod \"coredns-66bc5c9577-8xwlw\" (UID: \"1b69788f-7d65-4052-bed4-5296f39e04d2\") " pod="kube-system/coredns-66bc5c9577-8xwlw" Nov 6 00:42:42.968738 kubelet[2990]: I1106 00:42:42.967993 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dh2p\" (UniqueName: \"kubernetes.io/projected/68d221fc-9d1f-4317-856d-8104af586bb8-kube-api-access-7dh2p\") pod \"calico-kube-controllers-5b655bc9b9-hp8mt\" (UID: \"68d221fc-9d1f-4317-856d-8104af586bb8\") " pod="calico-system/calico-kube-controllers-5b655bc9b9-hp8mt" Nov 6 00:42:42.968738 kubelet[2990]: I1106 00:42:42.968005 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1a660b5-fc2e-4025-a112-e4a7be1ec985-config-volume\") pod \"coredns-66bc5c9577-84pcz\" (UID: \"f1a660b5-fc2e-4025-a112-e4a7be1ec985\") " pod="kube-system/coredns-66bc5c9577-84pcz" Nov 6 00:42:42.968738 kubelet[2990]: I1106 00:42:42.968015 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ss6s\" (UniqueName: \"kubernetes.io/projected/1b69788f-7d65-4052-bed4-5296f39e04d2-kube-api-access-8ss6s\") pod \"coredns-66bc5c9577-8xwlw\" (UID: \"1b69788f-7d65-4052-bed4-5296f39e04d2\") " pod="kube-system/coredns-66bc5c9577-8xwlw" Nov 6 00:42:42.968738 kubelet[2990]: I1106 00:42:42.968030 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64e4744f-a9ed-4557-9e74-304e879412c8-config\") pod \"goldmane-7c778bb748-tx6df\" (UID: \"64e4744f-a9ed-4557-9e74-304e879412c8\") " pod="calico-system/goldmane-7c778bb748-tx6df" Nov 6 00:42:42.968738 kubelet[2990]: I1106 00:42:42.968040 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-4mvkl\" (UniqueName: \"kubernetes.io/projected/f1a660b5-fc2e-4025-a112-e4a7be1ec985-kube-api-access-4mvkl\") pod \"coredns-66bc5c9577-84pcz\" (UID: \"f1a660b5-fc2e-4025-a112-e4a7be1ec985\") " pod="kube-system/coredns-66bc5c9577-84pcz" Nov 6 00:42:42.970976 kubelet[2990]: I1106 00:42:42.968051 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/64e4744f-a9ed-4557-9e74-304e879412c8-goldmane-key-pair\") pod \"goldmane-7c778bb748-tx6df\" (UID: \"64e4744f-a9ed-4557-9e74-304e879412c8\") " pod="calico-system/goldmane-7c778bb748-tx6df" Nov 6 00:42:42.970976 kubelet[2990]: I1106 00:42:42.968071 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68d221fc-9d1f-4317-856d-8104af586bb8-tigera-ca-bundle\") pod \"calico-kube-controllers-5b655bc9b9-hp8mt\" (UID: \"68d221fc-9d1f-4317-856d-8104af586bb8\") " pod="calico-system/calico-kube-controllers-5b655bc9b9-hp8mt" Nov 6 00:42:42.970976 kubelet[2990]: I1106 00:42:42.968080 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvw6b\" (UniqueName: \"kubernetes.io/projected/f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e-kube-api-access-lvw6b\") pod \"calico-apiserver-5b4c854798-qdwsh\" (UID: \"f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e\") " pod="calico-apiserver/calico-apiserver-5b4c854798-qdwsh" Nov 6 00:42:42.970976 kubelet[2990]: I1106 00:42:42.968089 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgfn7\" (UniqueName: \"kubernetes.io/projected/d2f87c6a-006e-4994-b5d3-96321f7afa74-kube-api-access-vgfn7\") pod \"calico-apiserver-b6568c646-xhvbr\" (UID: \"d2f87c6a-006e-4994-b5d3-96321f7afa74\") " pod="calico-apiserver/calico-apiserver-b6568c646-xhvbr" Nov 6 00:42:42.970976 
kubelet[2990]: I1106 00:42:42.968099 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgmlj\" (UniqueName: \"kubernetes.io/projected/64e4744f-a9ed-4557-9e74-304e879412c8-kube-api-access-cgmlj\") pod \"goldmane-7c778bb748-tx6df\" (UID: \"64e4744f-a9ed-4557-9e74-304e879412c8\") " pod="calico-system/goldmane-7c778bb748-tx6df" Nov 6 00:42:42.971088 kubelet[2990]: I1106 00:42:42.968111 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64e4744f-a9ed-4557-9e74-304e879412c8-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-tx6df\" (UID: \"64e4744f-a9ed-4557-9e74-304e879412c8\") " pod="calico-system/goldmane-7c778bb748-tx6df" Nov 6 00:42:43.175028 containerd[1685]: time="2025-11-06T00:42:43.174914042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66b8974474-t5gk5,Uid:294239ca-8d86-4b82-a8b1-d110ecda217d,Namespace:calico-system,Attempt:0,}" Nov 6 00:42:43.181417 containerd[1685]: time="2025-11-06T00:42:43.181373837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4c854798-5ckfk,Uid:c0e494e7-cf65-452e-8423-9173b08e8c13,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:42:43.201768 containerd[1685]: time="2025-11-06T00:42:43.201540719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6568c646-xhvbr,Uid:d2f87c6a-006e-4994-b5d3-96321f7afa74,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:42:43.209168 containerd[1685]: time="2025-11-06T00:42:43.209130693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4c854798-qdwsh,Uid:f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:42:43.214653 containerd[1685]: time="2025-11-06T00:42:43.214419327Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-84pcz,Uid:f1a660b5-fc2e-4025-a112-e4a7be1ec985,Namespace:kube-system,Attempt:0,}" Nov 6 00:42:43.227437 containerd[1685]: time="2025-11-06T00:42:43.226615091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b655bc9b9-hp8mt,Uid:68d221fc-9d1f-4317-856d-8104af586bb8,Namespace:calico-system,Attempt:0,}" Nov 6 00:42:43.229383 containerd[1685]: time="2025-11-06T00:42:43.229247529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tx6df,Uid:64e4744f-a9ed-4557-9e74-304e879412c8,Namespace:calico-system,Attempt:0,}" Nov 6 00:42:43.233272 containerd[1685]: time="2025-11-06T00:42:43.232924096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8xwlw,Uid:1b69788f-7d65-4052-bed4-5296f39e04d2,Namespace:kube-system,Attempt:0,}" Nov 6 00:42:43.293696 systemd[1]: Created slice kubepods-besteffort-podd1996df9_2b05_483b_a46f_e6437e23c06c.slice - libcontainer container kubepods-besteffort-podd1996df9_2b05_483b_a46f_e6437e23c06c.slice. 
Nov 6 00:42:43.296311 containerd[1685]: time="2025-11-06T00:42:43.296287326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct65m,Uid:d1996df9-2b05-483b-a46f-e6437e23c06c,Namespace:calico-system,Attempt:0,}" Nov 6 00:42:43.458090 containerd[1685]: time="2025-11-06T00:42:43.457391332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 6 00:42:43.492731 containerd[1685]: time="2025-11-06T00:42:43.492701465Z" level=error msg="Failed to destroy network for sandbox \"0039429aa57e8ce7f5fea3ad8c182659e30af46e99332dc460d7a889dde403cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.494383 containerd[1685]: time="2025-11-06T00:42:43.494247250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8xwlw,Uid:1b69788f-7d65-4052-bed4-5296f39e04d2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0039429aa57e8ce7f5fea3ad8c182659e30af46e99332dc460d7a889dde403cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.495396 kubelet[2990]: E1106 00:42:43.495364 2990 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0039429aa57e8ce7f5fea3ad8c182659e30af46e99332dc460d7a889dde403cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.495522 kubelet[2990]: E1106 00:42:43.495505 2990 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0039429aa57e8ce7f5fea3ad8c182659e30af46e99332dc460d7a889dde403cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8xwlw" Nov 6 00:42:43.495580 kubelet[2990]: E1106 00:42:43.495524 2990 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0039429aa57e8ce7f5fea3ad8c182659e30af46e99332dc460d7a889dde403cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8xwlw" Nov 6 00:42:43.495880 kubelet[2990]: E1106 00:42:43.495652 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-8xwlw_kube-system(1b69788f-7d65-4052-bed4-5296f39e04d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-8xwlw_kube-system(1b69788f-7d65-4052-bed4-5296f39e04d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0039429aa57e8ce7f5fea3ad8c182659e30af46e99332dc460d7a889dde403cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-8xwlw" podUID="1b69788f-7d65-4052-bed4-5296f39e04d2" Nov 6 00:42:43.514307 containerd[1685]: time="2025-11-06T00:42:43.514280445Z" level=error msg="Failed to destroy network for sandbox \"08084833d49a088b3f8e461de5e5a8ae34ba44681e4fe5ff84e4d95ee6fae4be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.515211 
containerd[1685]: time="2025-11-06T00:42:43.515171415Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b655bc9b9-hp8mt,Uid:68d221fc-9d1f-4317-856d-8104af586bb8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"08084833d49a088b3f8e461de5e5a8ae34ba44681e4fe5ff84e4d95ee6fae4be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.515714 kubelet[2990]: E1106 00:42:43.515418 2990 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08084833d49a088b3f8e461de5e5a8ae34ba44681e4fe5ff84e4d95ee6fae4be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.515714 kubelet[2990]: E1106 00:42:43.515466 2990 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08084833d49a088b3f8e461de5e5a8ae34ba44681e4fe5ff84e4d95ee6fae4be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b655bc9b9-hp8mt" Nov 6 00:42:43.515714 kubelet[2990]: E1106 00:42:43.515480 2990 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08084833d49a088b3f8e461de5e5a8ae34ba44681e4fe5ff84e4d95ee6fae4be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-5b655bc9b9-hp8mt" Nov 6 00:42:43.515800 kubelet[2990]: E1106 00:42:43.515517 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b655bc9b9-hp8mt_calico-system(68d221fc-9d1f-4317-856d-8104af586bb8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b655bc9b9-hp8mt_calico-system(68d221fc-9d1f-4317-856d-8104af586bb8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08084833d49a088b3f8e461de5e5a8ae34ba44681e4fe5ff84e4d95ee6fae4be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b655bc9b9-hp8mt" podUID="68d221fc-9d1f-4317-856d-8104af586bb8" Nov 6 00:42:43.535150 containerd[1685]: time="2025-11-06T00:42:43.534780673Z" level=error msg="Failed to destroy network for sandbox \"8842319378797661f8d72d8ed5727745a6183a8cf4e4bc6a427204c4f1996766\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.536296 containerd[1685]: time="2025-11-06T00:42:43.536260844Z" level=error msg="Failed to destroy network for sandbox \"d814c0b8c865fd42e103d3b6247b50f9cfd65e762250909c598c603e2374a77f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.537450 containerd[1685]: time="2025-11-06T00:42:43.537420470Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4c854798-5ckfk,Uid:c0e494e7-cf65-452e-8423-9173b08e8c13,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"8842319378797661f8d72d8ed5727745a6183a8cf4e4bc6a427204c4f1996766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.537744 kubelet[2990]: E1106 00:42:43.537621 2990 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8842319378797661f8d72d8ed5727745a6183a8cf4e4bc6a427204c4f1996766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.537744 kubelet[2990]: E1106 00:42:43.537660 2990 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8842319378797661f8d72d8ed5727745a6183a8cf4e4bc6a427204c4f1996766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b4c854798-5ckfk" Nov 6 00:42:43.537744 kubelet[2990]: E1106 00:42:43.537674 2990 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8842319378797661f8d72d8ed5727745a6183a8cf4e4bc6a427204c4f1996766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b4c854798-5ckfk" Nov 6 00:42:43.538192 kubelet[2990]: E1106 00:42:43.537707 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b4c854798-5ckfk_calico-apiserver(c0e494e7-cf65-452e-8423-9173b08e8c13)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b4c854798-5ckfk_calico-apiserver(c0e494e7-cf65-452e-8423-9173b08e8c13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8842319378797661f8d72d8ed5727745a6183a8cf4e4bc6a427204c4f1996766\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b4c854798-5ckfk" podUID="c0e494e7-cf65-452e-8423-9173b08e8c13" Nov 6 00:42:43.539176 kubelet[2990]: E1106 00:42:43.539009 2990 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d814c0b8c865fd42e103d3b6247b50f9cfd65e762250909c598c603e2374a77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.539176 kubelet[2990]: E1106 00:42:43.539039 2990 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d814c0b8c865fd42e103d3b6247b50f9cfd65e762250909c598c603e2374a77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-tx6df" Nov 6 00:42:43.539176 kubelet[2990]: E1106 00:42:43.539052 2990 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d814c0b8c865fd42e103d3b6247b50f9cfd65e762250909c598c603e2374a77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-7c778bb748-tx6df" Nov 6 00:42:43.539277 containerd[1685]: time="2025-11-06T00:42:43.538315983Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tx6df,Uid:64e4744f-a9ed-4557-9e74-304e879412c8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d814c0b8c865fd42e103d3b6247b50f9cfd65e762250909c598c603e2374a77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.539315 kubelet[2990]: E1106 00:42:43.539083 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-tx6df_calico-system(64e4744f-a9ed-4557-9e74-304e879412c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-tx6df_calico-system(64e4744f-a9ed-4557-9e74-304e879412c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d814c0b8c865fd42e103d3b6247b50f9cfd65e762250909c598c603e2374a77f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-tx6df" podUID="64e4744f-a9ed-4557-9e74-304e879412c8" Nov 6 00:42:43.540060 containerd[1685]: time="2025-11-06T00:42:43.540034131Z" level=error msg="Failed to destroy network for sandbox \"9932b381549b14b1f327e9599c1a4bade66716901256508da5a5f913f2389dfd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.540257 containerd[1685]: time="2025-11-06T00:42:43.540222779Z" level=error msg="Failed to destroy network for sandbox 
\"1b4d3f3c3558fa71a0d1f8ebc572c042e7894cfab6d4beaa89d7c8331d134eab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.541115 containerd[1685]: time="2025-11-06T00:42:43.541028677Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6568c646-xhvbr,Uid:d2f87c6a-006e-4994-b5d3-96321f7afa74,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9932b381549b14b1f327e9599c1a4bade66716901256508da5a5f913f2389dfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.541283 kubelet[2990]: E1106 00:42:43.541264 2990 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9932b381549b14b1f327e9599c1a4bade66716901256508da5a5f913f2389dfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.544173 kubelet[2990]: E1106 00:42:43.541351 2990 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9932b381549b14b1f327e9599c1a4bade66716901256508da5a5f913f2389dfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6568c646-xhvbr" Nov 6 00:42:43.544173 kubelet[2990]: E1106 00:42:43.541364 2990 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9932b381549b14b1f327e9599c1a4bade66716901256508da5a5f913f2389dfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6568c646-xhvbr" Nov 6 00:42:43.544173 kubelet[2990]: E1106 00:42:43.541436 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b6568c646-xhvbr_calico-apiserver(d2f87c6a-006e-4994-b5d3-96321f7afa74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b6568c646-xhvbr_calico-apiserver(d2f87c6a-006e-4994-b5d3-96321f7afa74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9932b381549b14b1f327e9599c1a4bade66716901256508da5a5f913f2389dfd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b6568c646-xhvbr" podUID="d2f87c6a-006e-4994-b5d3-96321f7afa74" Nov 6 00:42:43.544274 containerd[1685]: time="2025-11-06T00:42:43.541490804Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-84pcz,Uid:f1a660b5-fc2e-4025-a112-e4a7be1ec985,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b4d3f3c3558fa71a0d1f8ebc572c042e7894cfab6d4beaa89d7c8331d134eab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.544274 containerd[1685]: time="2025-11-06T00:42:43.542917216Z" level=error msg="Failed to destroy network for sandbox \"a75ca6e9f707647c92940ff21687c79c9632e4b483bee5fdb6d7d69a1f6ebb6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.544330 kubelet[2990]: E1106 00:42:43.541596 2990 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b4d3f3c3558fa71a0d1f8ebc572c042e7894cfab6d4beaa89d7c8331d134eab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.544330 kubelet[2990]: E1106 00:42:43.541611 2990 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b4d3f3c3558fa71a0d1f8ebc572c042e7894cfab6d4beaa89d7c8331d134eab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-84pcz" Nov 6 00:42:43.544330 kubelet[2990]: E1106 00:42:43.541619 2990 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b4d3f3c3558fa71a0d1f8ebc572c042e7894cfab6d4beaa89d7c8331d134eab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-84pcz" Nov 6 00:42:43.544395 kubelet[2990]: E1106 00:42:43.541638 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-84pcz_kube-system(f1a660b5-fc2e-4025-a112-e4a7be1ec985)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-84pcz_kube-system(f1a660b5-fc2e-4025-a112-e4a7be1ec985)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"1b4d3f3c3558fa71a0d1f8ebc572c042e7894cfab6d4beaa89d7c8331d134eab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-84pcz" podUID="f1a660b5-fc2e-4025-a112-e4a7be1ec985" Nov 6 00:42:43.544553 containerd[1685]: time="2025-11-06T00:42:43.544501102Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct65m,Uid:d1996df9-2b05-483b-a46f-e6437e23c06c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a75ca6e9f707647c92940ff21687c79c9632e4b483bee5fdb6d7d69a1f6ebb6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.544755 kubelet[2990]: E1106 00:42:43.544623 2990 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a75ca6e9f707647c92940ff21687c79c9632e4b483bee5fdb6d7d69a1f6ebb6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.544755 kubelet[2990]: E1106 00:42:43.544651 2990 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a75ca6e9f707647c92940ff21687c79c9632e4b483bee5fdb6d7d69a1f6ebb6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ct65m" Nov 6 00:42:43.544755 kubelet[2990]: E1106 00:42:43.544663 2990 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"a75ca6e9f707647c92940ff21687c79c9632e4b483bee5fdb6d7d69a1f6ebb6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ct65m" Nov 6 00:42:43.544818 kubelet[2990]: E1106 00:42:43.544690 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ct65m_calico-system(d1996df9-2b05-483b-a46f-e6437e23c06c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ct65m_calico-system(d1996df9-2b05-483b-a46f-e6437e23c06c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a75ca6e9f707647c92940ff21687c79c9632e4b483bee5fdb6d7d69a1f6ebb6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ct65m" podUID="d1996df9-2b05-483b-a46f-e6437e23c06c" Nov 6 00:42:43.552350 containerd[1685]: time="2025-11-06T00:42:43.552317806Z" level=error msg="Failed to destroy network for sandbox \"b81cba6315e147b2e1a7fb3c4fee46c0c7075326ee45800c0664805046fc1469\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.553132 containerd[1685]: time="2025-11-06T00:42:43.553068882Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4c854798-qdwsh,Uid:f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b81cba6315e147b2e1a7fb3c4fee46c0c7075326ee45800c0664805046fc1469\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.553304 containerd[1685]: time="2025-11-06T00:42:43.553279602Z" level=error msg="Failed to destroy network for sandbox \"97b788f81c40f6e540a24402046a228291744be3db1fdfc2b4de971aaa3903f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.553432 kubelet[2990]: E1106 00:42:43.553414 2990 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b81cba6315e147b2e1a7fb3c4fee46c0c7075326ee45800c0664805046fc1469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.553706 kubelet[2990]: E1106 00:42:43.553480 2990 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b81cba6315e147b2e1a7fb3c4fee46c0c7075326ee45800c0664805046fc1469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b4c854798-qdwsh" Nov 6 00:42:43.553706 kubelet[2990]: E1106 00:42:43.553495 2990 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b81cba6315e147b2e1a7fb3c4fee46c0c7075326ee45800c0664805046fc1469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b4c854798-qdwsh" Nov 6 00:42:43.553706 
kubelet[2990]: E1106 00:42:43.553537 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b4c854798-qdwsh_calico-apiserver(f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b4c854798-qdwsh_calico-apiserver(f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b81cba6315e147b2e1a7fb3c4fee46c0c7075326ee45800c0664805046fc1469\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b4c854798-qdwsh" podUID="f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e" Nov 6 00:42:43.553784 containerd[1685]: time="2025-11-06T00:42:43.553618820Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66b8974474-t5gk5,Uid:294239ca-8d86-4b82-a8b1-d110ecda217d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b788f81c40f6e540a24402046a228291744be3db1fdfc2b4de971aaa3903f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.554295 kubelet[2990]: E1106 00:42:43.554206 2990 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b788f81c40f6e540a24402046a228291744be3db1fdfc2b4de971aaa3903f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:42:43.554295 kubelet[2990]: E1106 00:42:43.554225 2990 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"97b788f81c40f6e540a24402046a228291744be3db1fdfc2b4de971aaa3903f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66b8974474-t5gk5" Nov 6 00:42:43.554295 kubelet[2990]: E1106 00:42:43.554235 2990 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b788f81c40f6e540a24402046a228291744be3db1fdfc2b4de971aaa3903f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66b8974474-t5gk5" Nov 6 00:42:43.554406 kubelet[2990]: E1106 00:42:43.554274 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-66b8974474-t5gk5_calico-system(294239ca-8d86-4b82-a8b1-d110ecda217d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-66b8974474-t5gk5_calico-system(294239ca-8d86-4b82-a8b1-d110ecda217d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97b788f81c40f6e540a24402046a228291744be3db1fdfc2b4de971aaa3903f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66b8974474-t5gk5" podUID="294239ca-8d86-4b82-a8b1-d110ecda217d" Nov 6 00:42:50.780263 kubelet[2990]: I1106 00:42:50.779897 2990 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:42:51.137032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2987230678.mount: Deactivated successfully. 
Nov 6 00:42:51.467600 containerd[1685]: time="2025-11-06T00:42:51.467203147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 6 00:42:51.476218 containerd[1685]: time="2025-11-06T00:42:51.455813892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:42:51.483566 containerd[1685]: time="2025-11-06T00:42:51.483453656Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:42:51.484094 containerd[1685]: time="2025-11-06T00:42:51.483701993Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:42:51.489412 containerd[1685]: time="2025-11-06T00:42:51.487205119Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.026816908s" Nov 6 00:42:51.489412 containerd[1685]: time="2025-11-06T00:42:51.487221567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 6 00:42:51.537117 containerd[1685]: time="2025-11-06T00:42:51.537091457Z" level=info msg="CreateContainer within sandbox \"ee6c73feb89cc92e9029178cca5c9eeb4782b2b9d55c9c4f429b0b0122243aeb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 6 00:42:51.579717 containerd[1685]: time="2025-11-06T00:42:51.579679403Z" level=info msg="Container 
11d5b65d22425718dbef4a1fa49c80b01f4beabf387719ff9e32b2595660cfaa: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:42:51.580283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1668522472.mount: Deactivated successfully. Nov 6 00:42:51.628435 containerd[1685]: time="2025-11-06T00:42:51.628401981Z" level=info msg="CreateContainer within sandbox \"ee6c73feb89cc92e9029178cca5c9eeb4782b2b9d55c9c4f429b0b0122243aeb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"11d5b65d22425718dbef4a1fa49c80b01f4beabf387719ff9e32b2595660cfaa\"" Nov 6 00:42:51.629134 containerd[1685]: time="2025-11-06T00:42:51.628769861Z" level=info msg="StartContainer for \"11d5b65d22425718dbef4a1fa49c80b01f4beabf387719ff9e32b2595660cfaa\"" Nov 6 00:42:51.633313 containerd[1685]: time="2025-11-06T00:42:51.633279537Z" level=info msg="connecting to shim 11d5b65d22425718dbef4a1fa49c80b01f4beabf387719ff9e32b2595660cfaa" address="unix:///run/containerd/s/c82e21b116edfd196c55a860050824059a5a574714573706ac3c2418abfb3880" protocol=ttrpc version=3 Nov 6 00:42:51.671955 systemd[1]: Started cri-containerd-11d5b65d22425718dbef4a1fa49c80b01f4beabf387719ff9e32b2595660cfaa.scope - libcontainer container 11d5b65d22425718dbef4a1fa49c80b01f4beabf387719ff9e32b2595660cfaa. Nov 6 00:42:51.737114 containerd[1685]: time="2025-11-06T00:42:51.737057705Z" level=info msg="StartContainer for \"11d5b65d22425718dbef4a1fa49c80b01f4beabf387719ff9e32b2595660cfaa\" returns successfully" Nov 6 00:42:52.194331 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 6 00:42:52.202461 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 6 00:42:52.636822 kubelet[2990]: I1106 00:42:52.636769 2990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9pgv8" podStartSLOduration=1.752062461 podStartE2EDuration="21.635611719s" podCreationTimestamp="2025-11-06 00:42:31 +0000 UTC" firstStartedPulling="2025-11-06 00:42:31.604407122 +0000 UTC m=+20.398080804" lastFinishedPulling="2025-11-06 00:42:51.487956384 +0000 UTC m=+40.281630062" observedRunningTime="2025-11-06 00:42:52.525625421 +0000 UTC m=+41.319299110" watchObservedRunningTime="2025-11-06 00:42:52.635611719 +0000 UTC m=+41.429285407" Nov 6 00:42:52.852701 kubelet[2990]: I1106 00:42:52.852669 2990 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hk9p\" (UniqueName: \"kubernetes.io/projected/294239ca-8d86-4b82-a8b1-d110ecda217d-kube-api-access-9hk9p\") pod \"294239ca-8d86-4b82-a8b1-d110ecda217d\" (UID: \"294239ca-8d86-4b82-a8b1-d110ecda217d\") " Nov 6 00:42:52.852701 kubelet[2990]: I1106 00:42:52.852703 2990 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/294239ca-8d86-4b82-a8b1-d110ecda217d-whisker-ca-bundle\") pod \"294239ca-8d86-4b82-a8b1-d110ecda217d\" (UID: \"294239ca-8d86-4b82-a8b1-d110ecda217d\") " Nov 6 00:42:52.852972 kubelet[2990]: I1106 00:42:52.852716 2990 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/294239ca-8d86-4b82-a8b1-d110ecda217d-whisker-backend-key-pair\") pod \"294239ca-8d86-4b82-a8b1-d110ecda217d\" (UID: \"294239ca-8d86-4b82-a8b1-d110ecda217d\") " Nov 6 00:42:52.880229 systemd[1]: var-lib-kubelet-pods-294239ca\x2d8d86\x2d4b82\x2da8b1\x2dd110ecda217d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9hk9p.mount: Deactivated successfully. 
Nov 6 00:42:52.890310 systemd[1]: var-lib-kubelet-pods-294239ca\x2d8d86\x2d4b82\x2da8b1\x2dd110ecda217d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 6 00:42:52.893497 kubelet[2990]: I1106 00:42:52.891434 2990 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/294239ca-8d86-4b82-a8b1-d110ecda217d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "294239ca-8d86-4b82-a8b1-d110ecda217d" (UID: "294239ca-8d86-4b82-a8b1-d110ecda217d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 00:42:52.893497 kubelet[2990]: I1106 00:42:52.891794 2990 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/294239ca-8d86-4b82-a8b1-d110ecda217d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "294239ca-8d86-4b82-a8b1-d110ecda217d" (UID: "294239ca-8d86-4b82-a8b1-d110ecda217d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 00:42:52.904248 kubelet[2990]: I1106 00:42:52.904140 2990 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/294239ca-8d86-4b82-a8b1-d110ecda217d-kube-api-access-9hk9p" (OuterVolumeSpecName: "kube-api-access-9hk9p") pod "294239ca-8d86-4b82-a8b1-d110ecda217d" (UID: "294239ca-8d86-4b82-a8b1-d110ecda217d"). InnerVolumeSpecName "kube-api-access-9hk9p". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:42:52.953188 kubelet[2990]: I1106 00:42:52.953140 2990 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9hk9p\" (UniqueName: \"kubernetes.io/projected/294239ca-8d86-4b82-a8b1-d110ecda217d-kube-api-access-9hk9p\") on node \"localhost\" DevicePath \"\"" Nov 6 00:42:52.953188 kubelet[2990]: I1106 00:42:52.953181 2990 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/294239ca-8d86-4b82-a8b1-d110ecda217d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 6 00:42:52.953188 kubelet[2990]: I1106 00:42:52.953193 2990 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/294239ca-8d86-4b82-a8b1-d110ecda217d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 6 00:42:53.292457 systemd[1]: Removed slice kubepods-besteffort-pod294239ca_8d86_4b82_a8b1_d110ecda217d.slice - libcontainer container kubepods-besteffort-pod294239ca_8d86_4b82_a8b1_d110ecda217d.slice. Nov 6 00:42:53.481606 kubelet[2990]: I1106 00:42:53.481582 2990 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:42:54.008753 systemd[1]: Created slice kubepods-besteffort-pod459c9ede_8e15_4c74_8abd_6b9e688b2183.slice - libcontainer container kubepods-besteffort-pod459c9ede_8e15_4c74_8abd_6b9e688b2183.slice. 
Nov 6 00:42:54.161490 kubelet[2990]: I1106 00:42:54.161366 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjh8h\" (UniqueName: \"kubernetes.io/projected/459c9ede-8e15-4c74-8abd-6b9e688b2183-kube-api-access-zjh8h\") pod \"whisker-767d5fc77b-l2scd\" (UID: \"459c9ede-8e15-4c74-8abd-6b9e688b2183\") " pod="calico-system/whisker-767d5fc77b-l2scd" Nov 6 00:42:54.161490 kubelet[2990]: I1106 00:42:54.161418 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/459c9ede-8e15-4c74-8abd-6b9e688b2183-whisker-backend-key-pair\") pod \"whisker-767d5fc77b-l2scd\" (UID: \"459c9ede-8e15-4c74-8abd-6b9e688b2183\") " pod="calico-system/whisker-767d5fc77b-l2scd" Nov 6 00:42:54.161490 kubelet[2990]: I1106 00:42:54.161444 2990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/459c9ede-8e15-4c74-8abd-6b9e688b2183-whisker-ca-bundle\") pod \"whisker-767d5fc77b-l2scd\" (UID: \"459c9ede-8e15-4c74-8abd-6b9e688b2183\") " pod="calico-system/whisker-767d5fc77b-l2scd" Nov 6 00:42:54.360323 containerd[1685]: time="2025-11-06T00:42:54.360115717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tx6df,Uid:64e4744f-a9ed-4557-9e74-304e879412c8,Namespace:calico-system,Attempt:0,}" Nov 6 00:42:54.724966 containerd[1685]: time="2025-11-06T00:42:54.724788467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-767d5fc77b-l2scd,Uid:459c9ede-8e15-4c74-8abd-6b9e688b2183,Namespace:calico-system,Attempt:0,}" Nov 6 00:42:54.964050 systemd-networkd[1585]: vxlan.calico: Link UP Nov 6 00:42:54.964055 systemd-networkd[1585]: vxlan.calico: Gained carrier Nov 6 00:42:55.487762 containerd[1685]: time="2025-11-06T00:42:55.487716627Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5b655bc9b9-hp8mt,Uid:68d221fc-9d1f-4317-856d-8104af586bb8,Namespace:calico-system,Attempt:0,}" Nov 6 00:42:55.520083 kubelet[2990]: I1106 00:42:55.518787 2990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="294239ca-8d86-4b82-a8b1-d110ecda217d" path="/var/lib/kubelet/pods/294239ca-8d86-4b82-a8b1-d110ecda217d/volumes" Nov 6 00:42:55.520329 containerd[1685]: time="2025-11-06T00:42:55.519934225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4c854798-qdwsh,Uid:f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:42:55.556670 kubelet[2990]: I1106 00:42:55.556644 2990 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:42:55.626839 containerd[1685]: time="2025-11-06T00:42:55.626805158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4c854798-5ckfk,Uid:c0e494e7-cf65-452e-8423-9173b08e8c13,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:42:55.719932 containerd[1685]: time="2025-11-06T00:42:55.719898828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-84pcz,Uid:f1a660b5-fc2e-4025-a112-e4a7be1ec985,Namespace:kube-system,Attempt:0,}" Nov 6 00:42:56.113395 containerd[1685]: time="2025-11-06T00:42:56.113348605Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11d5b65d22425718dbef4a1fa49c80b01f4beabf387719ff9e32b2595660cfaa\" id:\"85a8bb72986105f2a5633747575642170b5d93bc5a84de51c373a5027cc2c5f8\" pid:4409 exited_at:{seconds:1762389776 nanos:113022946}" Nov 6 00:42:56.309539 containerd[1685]: time="2025-11-06T00:42:56.309320407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct65m,Uid:d1996df9-2b05-483b-a46f-e6437e23c06c,Namespace:calico-system,Attempt:0,}" Nov 6 00:42:56.686332 containerd[1685]: time="2025-11-06T00:42:56.686301127Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"11d5b65d22425718dbef4a1fa49c80b01f4beabf387719ff9e32b2595660cfaa\" id:\"5a721a7bb9999ebaec69f94be1b2f8726255264906601871c8d9fc470c293c5e\" pid:4450 exited_at:{seconds:1762389776 nanos:686081684}" Nov 6 00:42:56.905044 systemd-networkd[1585]: vxlan.calico: Gained IPv6LL Nov 6 00:42:58.189196 systemd-networkd[1585]: calib8f84cd0ab4: Link UP Nov 6 00:42:58.190173 systemd-networkd[1585]: calib8f84cd0ab4: Gained carrier Nov 6 00:42:58.208342 containerd[1685]: 2025-11-06 00:42:55.549 [INFO][4318] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5b655bc9b9--hp8mt-eth0 calico-kube-controllers-5b655bc9b9- calico-system 68d221fc-9d1f-4317-856d-8104af586bb8 860 0 2025-11-06 00:42:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b655bc9b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5b655bc9b9-hp8mt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib8f84cd0ab4 [] [] }} ContainerID="b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c" Namespace="calico-system" Pod="calico-kube-controllers-5b655bc9b9-hp8mt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b655bc9b9--hp8mt-" Nov 6 00:42:58.208342 containerd[1685]: 2025-11-06 00:42:55.550 [INFO][4318] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c" Namespace="calico-system" Pod="calico-kube-controllers-5b655bc9b9-hp8mt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b655bc9b9--hp8mt-eth0" Nov 6 00:42:58.208342 containerd[1685]: 2025-11-06 00:42:58.134 [INFO][4331] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c" HandleID="k8s-pod-network.b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c" Workload="localhost-k8s-calico--kube--controllers--5b655bc9b9--hp8mt-eth0" Nov 6 00:42:58.208670 containerd[1685]: 2025-11-06 00:42:58.134 [INFO][4331] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c" HandleID="k8s-pod-network.b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c" Workload="localhost-k8s-calico--kube--controllers--5b655bc9b9--hp8mt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f1b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5b655bc9b9-hp8mt", "timestamp":"2025-11-06 00:42:58.134809771 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:42:58.208670 containerd[1685]: 2025-11-06 00:42:58.134 [INFO][4331] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:42:58.208670 containerd[1685]: 2025-11-06 00:42:58.135 [INFO][4331] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:42:58.208670 containerd[1685]: 2025-11-06 00:42:58.136 [INFO][4331] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:42:58.208670 containerd[1685]: 2025-11-06 00:42:58.147 [INFO][4331] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c" host="localhost" Nov 6 00:42:58.208670 containerd[1685]: 2025-11-06 00:42:58.159 [INFO][4331] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:42:58.208670 containerd[1685]: 2025-11-06 00:42:58.164 [INFO][4331] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:42:58.208670 containerd[1685]: 2025-11-06 00:42:58.165 [INFO][4331] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:42:58.208670 containerd[1685]: 2025-11-06 00:42:58.168 [INFO][4331] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:42:58.208670 containerd[1685]: 2025-11-06 00:42:58.168 [INFO][4331] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c" host="localhost" Nov 6 00:42:58.208845 containerd[1685]: 2025-11-06 00:42:58.170 [INFO][4331] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c Nov 6 00:42:58.208845 containerd[1685]: 2025-11-06 00:42:58.174 [INFO][4331] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c" host="localhost" Nov 6 00:42:58.208845 containerd[1685]: 2025-11-06 00:42:58.179 [INFO][4331] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c" host="localhost" Nov 6 00:42:58.208845 containerd[1685]: 2025-11-06 00:42:58.179 [INFO][4331] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c" host="localhost" Nov 6 00:42:58.208845 containerd[1685]: 2025-11-06 00:42:58.179 [INFO][4331] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:42:58.208845 containerd[1685]: 2025-11-06 00:42:58.179 [INFO][4331] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c" HandleID="k8s-pod-network.b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c" Workload="localhost-k8s-calico--kube--controllers--5b655bc9b9--hp8mt-eth0" Nov 6 00:42:58.209095 containerd[1685]: 2025-11-06 00:42:58.184 [INFO][4318] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c" Namespace="calico-system" Pod="calico-kube-controllers-5b655bc9b9-hp8mt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b655bc9b9--hp8mt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b655bc9b9--hp8mt-eth0", GenerateName:"calico-kube-controllers-5b655bc9b9-", Namespace:"calico-system", SelfLink:"", UID:"68d221fc-9d1f-4317-856d-8104af586bb8", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 42, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b655bc9b9", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5b655bc9b9-hp8mt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib8f84cd0ab4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:42:58.215492 containerd[1685]: 2025-11-06 00:42:58.184 [INFO][4318] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c" Namespace="calico-system" Pod="calico-kube-controllers-5b655bc9b9-hp8mt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b655bc9b9--hp8mt-eth0" Nov 6 00:42:58.215492 containerd[1685]: 2025-11-06 00:42:58.184 [INFO][4318] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib8f84cd0ab4 ContainerID="b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c" Namespace="calico-system" Pod="calico-kube-controllers-5b655bc9b9-hp8mt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b655bc9b9--hp8mt-eth0" Nov 6 00:42:58.215492 containerd[1685]: 2025-11-06 00:42:58.190 [INFO][4318] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c" Namespace="calico-system" Pod="calico-kube-controllers-5b655bc9b9-hp8mt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b655bc9b9--hp8mt-eth0" Nov 6 00:42:58.215579 containerd[1685]: 2025-11-06 
00:42:58.191 [INFO][4318] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c" Namespace="calico-system" Pod="calico-kube-controllers-5b655bc9b9-hp8mt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b655bc9b9--hp8mt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b655bc9b9--hp8mt-eth0", GenerateName:"calico-kube-controllers-5b655bc9b9-", Namespace:"calico-system", SelfLink:"", UID:"68d221fc-9d1f-4317-856d-8104af586bb8", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 42, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b655bc9b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c", Pod:"calico-kube-controllers-5b655bc9b9-hp8mt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib8f84cd0ab4", MAC:"66:ca:17:e6:db:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:42:58.215625 containerd[1685]: 2025-11-06 
00:42:58.204 [INFO][4318] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c" Namespace="calico-system" Pod="calico-kube-controllers-5b655bc9b9-hp8mt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b655bc9b9--hp8mt-eth0" Nov 6 00:42:58.310994 containerd[1685]: time="2025-11-06T00:42:58.310966270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8xwlw,Uid:1b69788f-7d65-4052-bed4-5296f39e04d2,Namespace:kube-system,Attempt:0,}" Nov 6 00:42:58.330376 containerd[1685]: time="2025-11-06T00:42:58.330342247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6568c646-xhvbr,Uid:d2f87c6a-006e-4994-b5d3-96321f7afa74,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:42:58.341522 systemd-networkd[1585]: cali505653cc17e: Link UP Nov 6 00:42:58.342513 systemd-networkd[1585]: cali505653cc17e: Gained carrier Nov 6 00:42:58.370031 containerd[1685]: 2025-11-06 00:42:55.583 [INFO][4332] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b4c854798--qdwsh-eth0 calico-apiserver-5b4c854798- calico-apiserver f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e 854 0 2025-11-06 00:42:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b4c854798 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b4c854798-qdwsh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali505653cc17e [] [] }} ContainerID="0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c" Namespace="calico-apiserver" Pod="calico-apiserver-5b4c854798-qdwsh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4c854798--qdwsh-" Nov 6 00:42:58.370031 containerd[1685]: 
2025-11-06 00:42:55.583 [INFO][4332] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c" Namespace="calico-apiserver" Pod="calico-apiserver-5b4c854798-qdwsh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4c854798--qdwsh-eth0" Nov 6 00:42:58.370031 containerd[1685]: 2025-11-06 00:42:58.133 [INFO][4345] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c" HandleID="k8s-pod-network.0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c" Workload="localhost-k8s-calico--apiserver--5b4c854798--qdwsh-eth0" Nov 6 00:42:58.375284 containerd[1685]: 2025-11-06 00:42:58.134 [INFO][4345] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c" HandleID="k8s-pod-network.0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c" Workload="localhost-k8s-calico--apiserver--5b4c854798--qdwsh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ed30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b4c854798-qdwsh", "timestamp":"2025-11-06 00:42:58.133546324 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:42:58.375284 containerd[1685]: 2025-11-06 00:42:58.135 [INFO][4345] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:42:58.375284 containerd[1685]: 2025-11-06 00:42:58.179 [INFO][4345] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:42:58.375284 containerd[1685]: 2025-11-06 00:42:58.179 [INFO][4345] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:42:58.375284 containerd[1685]: 2025-11-06 00:42:58.246 [INFO][4345] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c" host="localhost" Nov 6 00:42:58.375284 containerd[1685]: 2025-11-06 00:42:58.279 [INFO][4345] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:42:58.375284 containerd[1685]: 2025-11-06 00:42:58.281 [INFO][4345] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:42:58.375284 containerd[1685]: 2025-11-06 00:42:58.283 [INFO][4345] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:42:58.375284 containerd[1685]: 2025-11-06 00:42:58.284 [INFO][4345] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:42:58.375284 containerd[1685]: 2025-11-06 00:42:58.284 [INFO][4345] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c" host="localhost" Nov 6 00:42:58.393442 containerd[1685]: 2025-11-06 00:42:58.285 [INFO][4345] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c Nov 6 00:42:58.393442 containerd[1685]: 2025-11-06 00:42:58.308 [INFO][4345] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c" host="localhost" Nov 6 00:42:58.393442 containerd[1685]: 2025-11-06 00:42:58.315 [INFO][4345] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c" host="localhost" Nov 6 00:42:58.393442 containerd[1685]: 2025-11-06 00:42:58.324 [INFO][4345] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c" host="localhost" Nov 6 00:42:58.393442 containerd[1685]: 2025-11-06 00:42:58.324 [INFO][4345] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:42:58.393442 containerd[1685]: 2025-11-06 00:42:58.324 [INFO][4345] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c" HandleID="k8s-pod-network.0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c" Workload="localhost-k8s-calico--apiserver--5b4c854798--qdwsh-eth0" Nov 6 00:42:58.393545 containerd[1685]: 2025-11-06 00:42:58.329 [INFO][4332] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c" Namespace="calico-apiserver" Pod="calico-apiserver-5b4c854798-qdwsh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4c854798--qdwsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b4c854798--qdwsh-eth0", GenerateName:"calico-apiserver-5b4c854798-", Namespace:"calico-apiserver", SelfLink:"", UID:"f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b4c854798", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b4c854798-qdwsh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali505653cc17e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:42:58.393598 containerd[1685]: 2025-11-06 00:42:58.339 [INFO][4332] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c" Namespace="calico-apiserver" Pod="calico-apiserver-5b4c854798-qdwsh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4c854798--qdwsh-eth0" Nov 6 00:42:58.393598 containerd[1685]: 2025-11-06 00:42:58.339 [INFO][4332] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali505653cc17e ContainerID="0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c" Namespace="calico-apiserver" Pod="calico-apiserver-5b4c854798-qdwsh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4c854798--qdwsh-eth0" Nov 6 00:42:58.393598 containerd[1685]: 2025-11-06 00:42:58.343 [INFO][4332] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c" Namespace="calico-apiserver" Pod="calico-apiserver-5b4c854798-qdwsh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4c854798--qdwsh-eth0" Nov 6 00:42:58.393650 containerd[1685]: 2025-11-06 00:42:58.343 [INFO][4332] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c" Namespace="calico-apiserver" Pod="calico-apiserver-5b4c854798-qdwsh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4c854798--qdwsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b4c854798--qdwsh-eth0", GenerateName:"calico-apiserver-5b4c854798-", Namespace:"calico-apiserver", SelfLink:"", UID:"f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b4c854798", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c", Pod:"calico-apiserver-5b4c854798-qdwsh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali505653cc17e", MAC:"12:20:f6:8a:96:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:42:58.393693 containerd[1685]: 2025-11-06 00:42:58.365 [INFO][4332] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c" Namespace="calico-apiserver" Pod="calico-apiserver-5b4c854798-qdwsh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4c854798--qdwsh-eth0" Nov 6 00:42:58.449675 systemd-networkd[1585]: cali73d5d0cc5a0: Link UP Nov 6 00:42:58.452555 systemd-networkd[1585]: cali73d5d0cc5a0: Gained carrier Nov 6 00:42:58.505449 containerd[1685]: 2025-11-06 00:42:55.749 [INFO][4347] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b4c854798--5ckfk-eth0 calico-apiserver-5b4c854798- calico-apiserver c0e494e7-cf65-452e-8423-9173b08e8c13 857 0 2025-11-06 00:42:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b4c854798 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b4c854798-5ckfk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali73d5d0cc5a0 [] [] }} ContainerID="aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9" Namespace="calico-apiserver" Pod="calico-apiserver-5b4c854798-5ckfk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4c854798--5ckfk-" Nov 6 00:42:58.505449 containerd[1685]: 2025-11-06 00:42:55.749 [INFO][4347] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9" Namespace="calico-apiserver" Pod="calico-apiserver-5b4c854798-5ckfk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4c854798--5ckfk-eth0" Nov 6 00:42:58.505449 containerd[1685]: 2025-11-06 00:42:58.133 [INFO][4360] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9" 
HandleID="k8s-pod-network.aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9" Workload="localhost-k8s-calico--apiserver--5b4c854798--5ckfk-eth0" Nov 6 00:42:58.511222 containerd[1685]: 2025-11-06 00:42:58.135 [INFO][4360] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9" HandleID="k8s-pod-network.aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9" Workload="localhost-k8s-calico--apiserver--5b4c854798--5ckfk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a7c40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b4c854798-5ckfk", "timestamp":"2025-11-06 00:42:58.133204752 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:42:58.511222 containerd[1685]: 2025-11-06 00:42:58.135 [INFO][4360] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:42:58.511222 containerd[1685]: 2025-11-06 00:42:58.324 [INFO][4360] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:42:58.511222 containerd[1685]: 2025-11-06 00:42:58.324 [INFO][4360] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:42:58.511222 containerd[1685]: 2025-11-06 00:42:58.350 [INFO][4360] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9" host="localhost" Nov 6 00:42:58.511222 containerd[1685]: 2025-11-06 00:42:58.388 [INFO][4360] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:42:58.511222 containerd[1685]: 2025-11-06 00:42:58.403 [INFO][4360] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:42:58.511222 containerd[1685]: 2025-11-06 00:42:58.404 [INFO][4360] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:42:58.511222 containerd[1685]: 2025-11-06 00:42:58.409 [INFO][4360] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:42:58.511222 containerd[1685]: 2025-11-06 00:42:58.409 [INFO][4360] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9" host="localhost" Nov 6 00:42:58.515559 containerd[1685]: 2025-11-06 00:42:58.410 [INFO][4360] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9 Nov 6 00:42:58.515559 containerd[1685]: 2025-11-06 00:42:58.419 [INFO][4360] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9" host="localhost" Nov 6 00:42:58.515559 containerd[1685]: 2025-11-06 00:42:58.438 [INFO][4360] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9" host="localhost" Nov 6 00:42:58.515559 containerd[1685]: 2025-11-06 00:42:58.438 [INFO][4360] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9" host="localhost" Nov 6 00:42:58.515559 containerd[1685]: 2025-11-06 00:42:58.438 [INFO][4360] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:42:58.515559 containerd[1685]: 2025-11-06 00:42:58.438 [INFO][4360] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9" HandleID="k8s-pod-network.aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9" Workload="localhost-k8s-calico--apiserver--5b4c854798--5ckfk-eth0" Nov 6 00:42:58.519836 containerd[1685]: 2025-11-06 00:42:58.442 [INFO][4347] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9" Namespace="calico-apiserver" Pod="calico-apiserver-5b4c854798-5ckfk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4c854798--5ckfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b4c854798--5ckfk-eth0", GenerateName:"calico-apiserver-5b4c854798-", Namespace:"calico-apiserver", SelfLink:"", UID:"c0e494e7-cf65-452e-8423-9173b08e8c13", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b4c854798", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b4c854798-5ckfk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73d5d0cc5a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:42:58.519955 containerd[1685]: 2025-11-06 00:42:58.442 [INFO][4347] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9" Namespace="calico-apiserver" Pod="calico-apiserver-5b4c854798-5ckfk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4c854798--5ckfk-eth0" Nov 6 00:42:58.519955 containerd[1685]: 2025-11-06 00:42:58.442 [INFO][4347] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73d5d0cc5a0 ContainerID="aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9" Namespace="calico-apiserver" Pod="calico-apiserver-5b4c854798-5ckfk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4c854798--5ckfk-eth0" Nov 6 00:42:58.519955 containerd[1685]: 2025-11-06 00:42:58.455 [INFO][4347] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9" Namespace="calico-apiserver" Pod="calico-apiserver-5b4c854798-5ckfk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4c854798--5ckfk-eth0" Nov 6 00:42:58.525605 containerd[1685]: 2025-11-06 00:42:58.459 [INFO][4347] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9" Namespace="calico-apiserver" Pod="calico-apiserver-5b4c854798-5ckfk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4c854798--5ckfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b4c854798--5ckfk-eth0", GenerateName:"calico-apiserver-5b4c854798-", Namespace:"calico-apiserver", SelfLink:"", UID:"c0e494e7-cf65-452e-8423-9173b08e8c13", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b4c854798", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9", Pod:"calico-apiserver-5b4c854798-5ckfk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73d5d0cc5a0", MAC:"72:2f:82:05:f2:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:42:58.526710 containerd[1685]: 2025-11-06 00:42:58.478 [INFO][4347] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9" Namespace="calico-apiserver" Pod="calico-apiserver-5b4c854798-5ckfk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b4c854798--5ckfk-eth0" Nov 6 00:42:58.565539 systemd-networkd[1585]: cali484973e90e0: Link UP Nov 6 00:42:58.571425 systemd-networkd[1585]: cali484973e90e0: Gained carrier Nov 6 00:42:58.613522 containerd[1685]: 2025-11-06 00:42:56.376 [INFO][4420] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ct65m-eth0 csi-node-driver- calico-system d1996df9-2b05-483b-a46f-e6437e23c06c 745 0 2025-11-06 00:42:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ct65m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali484973e90e0 [] [] }} ContainerID="2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116" Namespace="calico-system" Pod="csi-node-driver-ct65m" WorkloadEndpoint="localhost-k8s-csi--node--driver--ct65m-" Nov 6 00:42:58.613522 containerd[1685]: 2025-11-06 00:42:56.376 [INFO][4420] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116" Namespace="calico-system" Pod="csi-node-driver-ct65m" WorkloadEndpoint="localhost-k8s-csi--node--driver--ct65m-eth0" Nov 6 00:42:58.613522 containerd[1685]: 2025-11-06 00:42:58.134 [INFO][4434] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116" HandleID="k8s-pod-network.2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116" 
Workload="localhost-k8s-csi--node--driver--ct65m-eth0" Nov 6 00:42:58.613913 containerd[1685]: 2025-11-06 00:42:58.135 [INFO][4434] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116" HandleID="k8s-pod-network.2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116" Workload="localhost-k8s-csi--node--driver--ct65m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036fe80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ct65m", "timestamp":"2025-11-06 00:42:58.134205415 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:42:58.613913 containerd[1685]: 2025-11-06 00:42:58.135 [INFO][4434] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:42:58.613913 containerd[1685]: 2025-11-06 00:42:58.439 [INFO][4434] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:42:58.613913 containerd[1685]: 2025-11-06 00:42:58.439 [INFO][4434] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:42:58.613913 containerd[1685]: 2025-11-06 00:42:58.449 [INFO][4434] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116" host="localhost" Nov 6 00:42:58.613913 containerd[1685]: 2025-11-06 00:42:58.484 [INFO][4434] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:42:58.613913 containerd[1685]: 2025-11-06 00:42:58.504 [INFO][4434] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:42:58.613913 containerd[1685]: 2025-11-06 00:42:58.508 [INFO][4434] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:42:58.613913 containerd[1685]: 2025-11-06 00:42:58.514 [INFO][4434] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:42:58.613913 containerd[1685]: 2025-11-06 00:42:58.514 [INFO][4434] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116" host="localhost" Nov 6 00:42:58.615309 containerd[1685]: 2025-11-06 00:42:58.520 [INFO][4434] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116 Nov 6 00:42:58.615309 containerd[1685]: 2025-11-06 00:42:58.529 [INFO][4434] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116" host="localhost" Nov 6 00:42:58.615309 containerd[1685]: 2025-11-06 00:42:58.543 [INFO][4434] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116" host="localhost" Nov 6 00:42:58.615309 containerd[1685]: 2025-11-06 00:42:58.543 [INFO][4434] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116" host="localhost" Nov 6 00:42:58.615309 containerd[1685]: 2025-11-06 00:42:58.543 [INFO][4434] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:42:58.615309 containerd[1685]: 2025-11-06 00:42:58.543 [INFO][4434] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116" HandleID="k8s-pod-network.2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116" Workload="localhost-k8s-csi--node--driver--ct65m-eth0" Nov 6 00:42:58.624417 containerd[1685]: 2025-11-06 00:42:58.549 [INFO][4420] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116" Namespace="calico-system" Pod="csi-node-driver-ct65m" WorkloadEndpoint="localhost-k8s-csi--node--driver--ct65m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ct65m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d1996df9-2b05-483b-a46f-e6437e23c06c", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 42, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ct65m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali484973e90e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:42:58.624527 containerd[1685]: 2025-11-06 00:42:58.550 [INFO][4420] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116" Namespace="calico-system" Pod="csi-node-driver-ct65m" WorkloadEndpoint="localhost-k8s-csi--node--driver--ct65m-eth0" Nov 6 00:42:58.624527 containerd[1685]: 2025-11-06 00:42:58.550 [INFO][4420] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali484973e90e0 ContainerID="2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116" Namespace="calico-system" Pod="csi-node-driver-ct65m" WorkloadEndpoint="localhost-k8s-csi--node--driver--ct65m-eth0" Nov 6 00:42:58.624527 containerd[1685]: 2025-11-06 00:42:58.575 [INFO][4420] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116" Namespace="calico-system" Pod="csi-node-driver-ct65m" WorkloadEndpoint="localhost-k8s-csi--node--driver--ct65m-eth0" Nov 6 00:42:58.624622 containerd[1685]: 2025-11-06 00:42:58.577 [INFO][4420] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116" 
Namespace="calico-system" Pod="csi-node-driver-ct65m" WorkloadEndpoint="localhost-k8s-csi--node--driver--ct65m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ct65m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d1996df9-2b05-483b-a46f-e6437e23c06c", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 42, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116", Pod:"csi-node-driver-ct65m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali484973e90e0", MAC:"a2:5d:98:b8:ad:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:42:58.637575 containerd[1685]: 2025-11-06 00:42:58.605 [INFO][4420] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116" Namespace="calico-system" Pod="csi-node-driver-ct65m" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--ct65m-eth0" Nov 6 00:42:58.687166 containerd[1685]: time="2025-11-06T00:42:58.686970768Z" level=info msg="connecting to shim b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c" address="unix:///run/containerd/s/474b02c28b790fb55008ecb5facbf603be2ddc6ddfe3ac98ceb932d46f28da0e" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:42:58.692379 systemd-networkd[1585]: cali7a54e9adb22: Link UP Nov 6 00:42:58.693898 containerd[1685]: time="2025-11-06T00:42:58.693827687Z" level=info msg="connecting to shim 0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c" address="unix:///run/containerd/s/24780c9d669bdaa54e9b548388173da90c509835eb4df7f6f2ac405f2f49efdc" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:42:58.695160 systemd-networkd[1585]: cali7a54e9adb22: Gained carrier Nov 6 00:42:58.695889 containerd[1685]: time="2025-11-06T00:42:58.695820377Z" level=info msg="connecting to shim aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9" address="unix:///run/containerd/s/02e0825fa5bff609888a161ae9ab9c47167b08bc7a9e5348dafbec48eda87437" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:42:58.726818 containerd[1685]: 2025-11-06 00:42:55.165 [INFO][4230] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--767d5fc77b--l2scd-eth0 whisker-767d5fc77b- calico-system 459c9ede-8e15-4c74-8abd-6b9e688b2183 938 0 2025-11-06 00:42:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:767d5fc77b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-767d5fc77b-l2scd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7a54e9adb22 [] [] }} ContainerID="4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f" Namespace="calico-system" Pod="whisker-767d5fc77b-l2scd" 
WorkloadEndpoint="localhost-k8s-whisker--767d5fc77b--l2scd-" Nov 6 00:42:58.726818 containerd[1685]: 2025-11-06 00:42:55.182 [INFO][4230] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f" Namespace="calico-system" Pod="whisker-767d5fc77b-l2scd" WorkloadEndpoint="localhost-k8s-whisker--767d5fc77b--l2scd-eth0" Nov 6 00:42:58.726818 containerd[1685]: 2025-11-06 00:42:58.133 [INFO][4289] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f" HandleID="k8s-pod-network.4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f" Workload="localhost-k8s-whisker--767d5fc77b--l2scd-eth0" Nov 6 00:42:58.727078 containerd[1685]: 2025-11-06 00:42:58.134 [INFO][4289] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f" HandleID="k8s-pod-network.4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f" Workload="localhost-k8s-whisker--767d5fc77b--l2scd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039f2d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-767d5fc77b-l2scd", "timestamp":"2025-11-06 00:42:58.133857747 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:42:58.727078 containerd[1685]: 2025-11-06 00:42:58.134 [INFO][4289] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:42:58.727078 containerd[1685]: 2025-11-06 00:42:58.543 [INFO][4289] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:42:58.727078 containerd[1685]: 2025-11-06 00:42:58.543 [INFO][4289] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:42:58.727078 containerd[1685]: 2025-11-06 00:42:58.571 [INFO][4289] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f" host="localhost" Nov 6 00:42:58.727078 containerd[1685]: 2025-11-06 00:42:58.585 [INFO][4289] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:42:58.727078 containerd[1685]: 2025-11-06 00:42:58.615 [INFO][4289] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:42:58.727078 containerd[1685]: 2025-11-06 00:42:58.618 [INFO][4289] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:42:58.727078 containerd[1685]: 2025-11-06 00:42:58.624 [INFO][4289] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:42:58.727078 containerd[1685]: 2025-11-06 00:42:58.624 [INFO][4289] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f" host="localhost" Nov 6 00:42:58.727425 containerd[1685]: 2025-11-06 00:42:58.626 [INFO][4289] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f Nov 6 00:42:58.727425 containerd[1685]: 2025-11-06 00:42:58.650 [INFO][4289] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f" host="localhost" Nov 6 00:42:58.727425 containerd[1685]: 2025-11-06 00:42:58.667 [INFO][4289] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f" host="localhost" Nov 6 00:42:58.727425 containerd[1685]: 2025-11-06 00:42:58.667 [INFO][4289] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f" host="localhost" Nov 6 00:42:58.727425 containerd[1685]: 2025-11-06 00:42:58.667 [INFO][4289] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:42:58.727425 containerd[1685]: 2025-11-06 00:42:58.667 [INFO][4289] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f" HandleID="k8s-pod-network.4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f" Workload="localhost-k8s-whisker--767d5fc77b--l2scd-eth0" Nov 6 00:42:58.737298 containerd[1685]: 2025-11-06 00:42:58.680 [INFO][4230] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f" Namespace="calico-system" Pod="whisker-767d5fc77b-l2scd" WorkloadEndpoint="localhost-k8s-whisker--767d5fc77b--l2scd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--767d5fc77b--l2scd-eth0", GenerateName:"whisker-767d5fc77b-", Namespace:"calico-system", SelfLink:"", UID:"459c9ede-8e15-4c74-8abd-6b9e688b2183", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 42, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"767d5fc77b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-767d5fc77b-l2scd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7a54e9adb22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:42:58.737298 containerd[1685]: 2025-11-06 00:42:58.680 [INFO][4230] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f" Namespace="calico-system" Pod="whisker-767d5fc77b-l2scd" WorkloadEndpoint="localhost-k8s-whisker--767d5fc77b--l2scd-eth0" Nov 6 00:42:58.737475 containerd[1685]: 2025-11-06 00:42:58.680 [INFO][4230] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a54e9adb22 ContainerID="4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f" Namespace="calico-system" Pod="whisker-767d5fc77b-l2scd" WorkloadEndpoint="localhost-k8s-whisker--767d5fc77b--l2scd-eth0" Nov 6 00:42:58.737475 containerd[1685]: 2025-11-06 00:42:58.696 [INFO][4230] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f" Namespace="calico-system" Pod="whisker-767d5fc77b-l2scd" WorkloadEndpoint="localhost-k8s-whisker--767d5fc77b--l2scd-eth0" Nov 6 00:42:58.747448 containerd[1685]: 2025-11-06 00:42:58.697 [INFO][4230] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f" Namespace="calico-system" Pod="whisker-767d5fc77b-l2scd" 
WorkloadEndpoint="localhost-k8s-whisker--767d5fc77b--l2scd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--767d5fc77b--l2scd-eth0", GenerateName:"whisker-767d5fc77b-", Namespace:"calico-system", SelfLink:"", UID:"459c9ede-8e15-4c74-8abd-6b9e688b2183", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 42, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"767d5fc77b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f", Pod:"whisker-767d5fc77b-l2scd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7a54e9adb22", MAC:"fa:80:7d:b3:18:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:42:58.741106 systemd[1]: Started cri-containerd-0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c.scope - libcontainer container 0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c. 
Nov 6 00:42:58.747666 containerd[1685]: 2025-11-06 00:42:58.723 [INFO][4230] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f" Namespace="calico-system" Pod="whisker-767d5fc77b-l2scd" WorkloadEndpoint="localhost-k8s-whisker--767d5fc77b--l2scd-eth0" Nov 6 00:42:58.747666 containerd[1685]: time="2025-11-06T00:42:58.741472439Z" level=info msg="connecting to shim 2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116" address="unix:///run/containerd/s/2428ffa295814dbecfbe29dfcdfc49f2642bff7c113eff952260e1206fca5790" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:42:58.746996 systemd[1]: Started cri-containerd-aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9.scope - libcontainer container aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9. Nov 6 00:42:58.751529 systemd[1]: Started cri-containerd-b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c.scope - libcontainer container b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c. Nov 6 00:42:58.788097 systemd[1]: Started cri-containerd-2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116.scope - libcontainer container 2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116. 
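The IPAM records above show Calico's allocation flow for the whisker pod: take the host-wide lock, confirm this host's affinity for block 192.168.88.128/26, load the block, assign one address (.133), write the block back to claim it, release the lock. A minimal sketch of just the "assign the next free address from an affine block" step, using only the standard library (the real implementation lives in Calico's ipam package and also records a handle per allocation and writes the block back with a compare-and-swap; the helper name here is illustrative):

```python
import ipaddress

def assign_from_block(cidr, allocated):
    """Return the next free address in an affine IPAM block.

    Walks the /26 block in order and hands out the first address not
    yet claimed; Calico's small blocks make this linear scan cheap.
    This models only the selection step, not the datastore write.
    """
    for addr in ipaddress.ip_network(cidr):
        s = str(addr)
        if s not in allocated:
            allocated.add(s)
            return s
    return None  # block exhausted; real IPAM would try another block

# Per the log, .128-.132 were already claimed before the whisker pod asked:
in_use = {f"192.168.88.{i}" for i in range(128, 133)}
print(assign_from_block("192.168.88.128/26", in_use))  # 192.168.88.133
```

Repeated calls reproduce the sequence seen later in the log (.134 for goldmane, .135 and .136 for the two coredns pods), since all four allocations come from the same affine block.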
Nov 6 00:42:58.808637 systemd-networkd[1585]: calidcda1a0d366: Link UP Nov 6 00:42:58.809147 systemd-networkd[1585]: calidcda1a0d366: Gained carrier Nov 6 00:42:58.817665 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:42:58.828529 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:42:58.829068 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:42:58.838332 containerd[1685]: 2025-11-06 00:42:55.166 [INFO][4198] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--tx6df-eth0 goldmane-7c778bb748- calico-system 64e4744f-a9ed-4557-9e74-304e879412c8 859 0 2025-11-06 00:42:29 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-tx6df eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidcda1a0d366 [] [] }} ContainerID="cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31" Namespace="calico-system" Pod="goldmane-7c778bb748-tx6df" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--tx6df-" Nov 6 00:42:58.838332 containerd[1685]: 2025-11-06 00:42:55.182 [INFO][4198] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31" Namespace="calico-system" Pod="goldmane-7c778bb748-tx6df" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--tx6df-eth0" Nov 6 00:42:58.838332 containerd[1685]: 2025-11-06 00:42:58.134 [INFO][4288] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31" HandleID="k8s-pod-network.cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31" Workload="localhost-k8s-goldmane--7c778bb748--tx6df-eth0" Nov 6 00:42:58.845113 containerd[1685]: 2025-11-06 00:42:58.135 [INFO][4288] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31" HandleID="k8s-pod-network.cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31" Workload="localhost-k8s-goldmane--7c778bb748--tx6df-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ea90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-tx6df", "timestamp":"2025-11-06 00:42:58.134518144 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:42:58.845113 containerd[1685]: 2025-11-06 00:42:58.135 [INFO][4288] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:42:58.845113 containerd[1685]: 2025-11-06 00:42:58.668 [INFO][4288] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
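The timestamps make the host-wide IPAM lock's serialization visible: this request (bracketed ID [4288]) logged "About to acquire" at 00:42:58.135 but only "Acquired" at 00:42:58.668, roughly half a second later, because another CNI invocation held the lock in between. A small sketch (assumed log line shape, illustrative function name) that pairs those two events per request ID and reports the wait:

```python
import re
from datetime import datetime

# Matches e.g.: 2025-11-06 00:42:58.135 [INFO][4288] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
LOCK_RE = re.compile(
    r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \[INFO\]\[(\d+)\] "
    r"ipam/ipam_plugin\.go \d+: (About to acquire|Acquired) host-wide IPAM lock"
)

def lock_wait_seconds(log_text):
    """Pair 'About to acquire'/'Acquired' lines by the bracketed request
    ID and return how long each CNI invocation waited for the lock."""
    pending, waits = {}, {}
    for ts, rid, event in LOCK_RE.findall(log_text):
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f")
        if event == "About to acquire":
            pending[rid] = t
        elif rid in pending:
            waits[rid] = (t - pending.pop(rid)).total_seconds()
    return waits
```

With four pods being networked concurrently on one node, these waits stack up, which is why all four assignments in this log land within the same second despite starting at different times.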
Nov 6 00:42:58.845113 containerd[1685]: 2025-11-06 00:42:58.669 [INFO][4288] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:42:58.845113 containerd[1685]: 2025-11-06 00:42:58.690 [INFO][4288] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31" host="localhost" Nov 6 00:42:58.845113 containerd[1685]: 2025-11-06 00:42:58.705 [INFO][4288] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:42:58.845113 containerd[1685]: 2025-11-06 00:42:58.732 [INFO][4288] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:42:58.845113 containerd[1685]: 2025-11-06 00:42:58.738 [INFO][4288] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:42:58.845113 containerd[1685]: 2025-11-06 00:42:58.746 [INFO][4288] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:42:58.845113 containerd[1685]: 2025-11-06 00:42:58.746 [INFO][4288] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31" host="localhost" Nov 6 00:42:58.850063 containerd[1685]: 2025-11-06 00:42:58.749 [INFO][4288] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31 Nov 6 00:42:58.850063 containerd[1685]: 2025-11-06 00:42:58.768 [INFO][4288] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31" host="localhost" Nov 6 00:42:58.850063 containerd[1685]: 2025-11-06 00:42:58.785 [INFO][4288] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31" host="localhost" Nov 6 00:42:58.850063 containerd[1685]: 2025-11-06 00:42:58.786 [INFO][4288] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31" host="localhost" Nov 6 00:42:58.850063 containerd[1685]: 2025-11-06 00:42:58.786 [INFO][4288] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:42:58.850063 containerd[1685]: 2025-11-06 00:42:58.786 [INFO][4288] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31" HandleID="k8s-pod-network.cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31" Workload="localhost-k8s-goldmane--7c778bb748--tx6df-eth0" Nov 6 00:42:58.856776 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:42:58.859207 containerd[1685]: 2025-11-06 00:42:58.791 [INFO][4198] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31" Namespace="calico-system" Pod="goldmane-7c778bb748-tx6df" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--tx6df-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--tx6df-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"64e4744f-a9ed-4557-9e74-304e879412c8", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 42, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-tx6df", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidcda1a0d366", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:42:58.859207 containerd[1685]: 2025-11-06 00:42:58.792 [INFO][4198] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31" Namespace="calico-system" Pod="goldmane-7c778bb748-tx6df" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--tx6df-eth0" Nov 6 00:42:58.859478 containerd[1685]: 2025-11-06 00:42:58.792 [INFO][4198] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidcda1a0d366 ContainerID="cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31" Namespace="calico-system" Pod="goldmane-7c778bb748-tx6df" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--tx6df-eth0" Nov 6 00:42:58.859478 containerd[1685]: 2025-11-06 00:42:58.809 [INFO][4198] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31" Namespace="calico-system" Pod="goldmane-7c778bb748-tx6df" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--tx6df-eth0" Nov 6 00:42:58.859558 containerd[1685]: 2025-11-06 00:42:58.811 [INFO][4198] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31" Namespace="calico-system" Pod="goldmane-7c778bb748-tx6df" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--tx6df-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--tx6df-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"64e4744f-a9ed-4557-9e74-304e879412c8", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 42, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31", Pod:"goldmane-7c778bb748-tx6df", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidcda1a0d366", MAC:"ae:d6:8c:86:4d:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:42:58.859720 containerd[1685]: 2025-11-06 00:42:58.833 [INFO][4198] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31" Namespace="calico-system" Pod="goldmane-7c778bb748-tx6df" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--tx6df-eth0" Nov 6 00:42:58.906024 systemd-networkd[1585]: calia2ae2f52b4a: Link UP Nov 6 00:42:58.909373 systemd-networkd[1585]: calia2ae2f52b4a: Gained carrier Nov 6 00:42:58.923325 containerd[1685]: time="2025-11-06T00:42:58.923263177Z" level=info msg="connecting to shim 4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f" address="unix:///run/containerd/s/7416831286de0c425c1e3b0409cefc7e88cb3f3cc2e54e08dac9e370027075da" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:42:58.951108 systemd[1]: Started cri-containerd-4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f.scope - libcontainer container 4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f. Nov 6 00:42:58.976194 containerd[1685]: 2025-11-06 00:42:55.827 [INFO][4361] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--84pcz-eth0 coredns-66bc5c9577- kube-system f1a660b5-fc2e-4025-a112-e4a7be1ec985 858 0 2025-11-06 00:42:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-84pcz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia2ae2f52b4a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668" Namespace="kube-system" Pod="coredns-66bc5c9577-84pcz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--84pcz-" Nov 6 00:42:58.976194 containerd[1685]: 2025-11-06 00:42:55.828 [INFO][4361] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668" Namespace="kube-system" Pod="coredns-66bc5c9577-84pcz" 
WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--84pcz-eth0" Nov 6 00:42:58.976194 containerd[1685]: 2025-11-06 00:42:58.133 [INFO][4392] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668" HandleID="k8s-pod-network.4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668" Workload="localhost-k8s-coredns--66bc5c9577--84pcz-eth0" Nov 6 00:42:58.977906 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:42:58.986720 containerd[1685]: 2025-11-06 00:42:58.136 [INFO][4392] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668" HandleID="k8s-pod-network.4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668" Workload="localhost-k8s-coredns--66bc5c9577--84pcz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f5d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-84pcz", "timestamp":"2025-11-06 00:42:58.133210618 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:42:58.986720 containerd[1685]: 2025-11-06 00:42:58.136 [INFO][4392] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:42:58.986720 containerd[1685]: 2025-11-06 00:42:58.786 [INFO][4392] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
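The coredns endpoint dumps nearby print Go structs with `%+v`, so integer fields come out in hex: `Port:0x35` is DNS on 53, `0x23c1` is the metrics port 9153, `0x1f90` and `0x1ff5` are the probe ports 8080 and 8181. A quick sketch (illustrative helper, regex fitted to the dump format seen in this log) that recovers name-to-decimal-port pairs from such a dump:

```python
import re

def decode_ports(struct_dump):
    """Pull Name/Port pairs out of a Go %+v WorkloadEndpointPort dump
    and convert the hex literals back to decimal port numbers."""
    pairs = re.findall(
        r'Name:"([^"]+)", Protocol:numorstring\.Protocol\{[^}]*\}, '
        r'Port:(0x[0-9a-fA-F]+)',
        struct_dump,
    )
    return {name: int(port, 16) for name, port in pairs}
```

Reading the hex directly also works, of course; the point is just that `0x35`, `0x23c1`, `0x1f90`, `0x1ff5` are the familiar coredns ports 53, 9153, 8080, 8181, not anomalous values.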
Nov 6 00:42:58.986720 containerd[1685]: 2025-11-06 00:42:58.786 [INFO][4392] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:42:58.986720 containerd[1685]: 2025-11-06 00:42:58.807 [INFO][4392] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668" host="localhost" Nov 6 00:42:58.986720 containerd[1685]: 2025-11-06 00:42:58.834 [INFO][4392] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:42:58.986720 containerd[1685]: 2025-11-06 00:42:58.841 [INFO][4392] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:42:58.986720 containerd[1685]: 2025-11-06 00:42:58.844 [INFO][4392] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:42:58.986720 containerd[1685]: 2025-11-06 00:42:58.849 [INFO][4392] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:42:58.986720 containerd[1685]: 2025-11-06 00:42:58.849 [INFO][4392] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668" host="localhost" Nov 6 00:42:59.005956 containerd[1685]: 2025-11-06 00:42:58.852 [INFO][4392] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668 Nov 6 00:42:59.005956 containerd[1685]: 2025-11-06 00:42:58.858 [INFO][4392] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668" host="localhost" Nov 6 00:42:59.005956 containerd[1685]: 2025-11-06 00:42:58.881 [INFO][4392] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668" host="localhost" Nov 6 00:42:59.005956 containerd[1685]: 2025-11-06 00:42:58.881 [INFO][4392] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668" host="localhost" Nov 6 00:42:59.005956 containerd[1685]: 2025-11-06 00:42:58.881 [INFO][4392] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:42:59.005956 containerd[1685]: 2025-11-06 00:42:58.881 [INFO][4392] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668" HandleID="k8s-pod-network.4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668" Workload="localhost-k8s-coredns--66bc5c9577--84pcz-eth0" Nov 6 00:42:59.018657 containerd[1685]: 2025-11-06 00:42:58.898 [INFO][4361] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668" Namespace="kube-system" Pod="coredns-66bc5c9577-84pcz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--84pcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--84pcz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f1a660b5-fc2e-4025-a112-e4a7be1ec985", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-84pcz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2ae2f52b4a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:42:59.018657 containerd[1685]: 2025-11-06 00:42:58.898 [INFO][4361] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668" Namespace="kube-system" Pod="coredns-66bc5c9577-84pcz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--84pcz-eth0" Nov 6 00:42:59.018657 containerd[1685]: 2025-11-06 00:42:58.898 [INFO][4361] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2ae2f52b4a ContainerID="4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668" Namespace="kube-system" Pod="coredns-66bc5c9577-84pcz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--84pcz-eth0" Nov 6 
00:42:59.018657 containerd[1685]: 2025-11-06 00:42:58.909 [INFO][4361] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668" Namespace="kube-system" Pod="coredns-66bc5c9577-84pcz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--84pcz-eth0" Nov 6 00:42:59.018657 containerd[1685]: 2025-11-06 00:42:58.917 [INFO][4361] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668" Namespace="kube-system" Pod="coredns-66bc5c9577-84pcz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--84pcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--84pcz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f1a660b5-fc2e-4025-a112-e4a7be1ec985", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668", Pod:"coredns-66bc5c9577-84pcz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2ae2f52b4a", 
MAC:"ea:0f:36:99:cb:8d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:42:59.018657 containerd[1685]: 2025-11-06 00:42:58.969 [INFO][4361] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668" Namespace="kube-system" Pod="coredns-66bc5c9577-84pcz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--84pcz-eth0" Nov 6 00:42:59.042490 systemd-networkd[1585]: cali2101674283c: Link UP Nov 6 00:42:59.043343 systemd-networkd[1585]: cali2101674283c: Gained carrier Nov 6 00:42:59.084499 containerd[1685]: 2025-11-06 00:42:58.463 [INFO][4489] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--8xwlw-eth0 coredns-66bc5c9577- kube-system 1b69788f-7d65-4052-bed4-5296f39e04d2 861 0 2025-11-06 00:42:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-8xwlw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2101674283c [{dns UDP 53 0 } 
{dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0" Namespace="kube-system" Pod="coredns-66bc5c9577-8xwlw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8xwlw-" Nov 6 00:42:59.084499 containerd[1685]: 2025-11-06 00:42:58.463 [INFO][4489] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0" Namespace="kube-system" Pod="coredns-66bc5c9577-8xwlw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8xwlw-eth0" Nov 6 00:42:59.084499 containerd[1685]: 2025-11-06 00:42:58.530 [INFO][4516] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0" HandleID="k8s-pod-network.80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0" Workload="localhost-k8s-coredns--66bc5c9577--8xwlw-eth0" Nov 6 00:42:59.084499 containerd[1685]: 2025-11-06 00:42:58.531 [INFO][4516] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0" HandleID="k8s-pod-network.80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0" Workload="localhost-k8s-coredns--66bc5c9577--8xwlw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033a850), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-8xwlw", "timestamp":"2025-11-06 00:42:58.530389552 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:42:59.084499 containerd[1685]: 2025-11-06 00:42:58.531 [INFO][4516] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 6 00:42:59.084499 containerd[1685]: 2025-11-06 00:42:58.882 [INFO][4516] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:42:59.084499 containerd[1685]: 2025-11-06 00:42:58.882 [INFO][4516] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:42:59.084499 containerd[1685]: 2025-11-06 00:42:58.905 [INFO][4516] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0" host="localhost" Nov 6 00:42:59.084499 containerd[1685]: 2025-11-06 00:42:58.966 [INFO][4516] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:42:59.084499 containerd[1685]: 2025-11-06 00:42:58.976 [INFO][4516] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:42:59.084499 containerd[1685]: 2025-11-06 00:42:58.979 [INFO][4516] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:42:59.084499 containerd[1685]: 2025-11-06 00:42:58.981 [INFO][4516] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:42:59.084499 containerd[1685]: 2025-11-06 00:42:58.981 [INFO][4516] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0" host="localhost" Nov 6 00:42:59.084499 containerd[1685]: 2025-11-06 00:42:58.982 [INFO][4516] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0 Nov 6 00:42:59.084499 containerd[1685]: 2025-11-06 00:42:58.990 [INFO][4516] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0" host="localhost" Nov 6 00:42:59.084499 containerd[1685]: 2025-11-06 00:42:59.035 [INFO][4516] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0" host="localhost" Nov 6 00:42:59.084499 containerd[1685]: 2025-11-06 00:42:59.035 [INFO][4516] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0" host="localhost" Nov 6 00:42:59.084499 containerd[1685]: 2025-11-06 00:42:59.035 [INFO][4516] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:42:59.084499 containerd[1685]: 2025-11-06 00:42:59.035 [INFO][4516] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0" HandleID="k8s-pod-network.80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0" Workload="localhost-k8s-coredns--66bc5c9577--8xwlw-eth0" Nov 6 00:42:59.085226 containerd[1685]: 2025-11-06 00:42:59.037 [INFO][4489] cni-plugin/k8s.go 418: Populated endpoint ContainerID="80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0" Namespace="kube-system" Pod="coredns-66bc5c9577-8xwlw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8xwlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--8xwlw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1b69788f-7d65-4052-bed4-5296f39e04d2", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-8xwlw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2101674283c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:42:59.085226 containerd[1685]: 2025-11-06 00:42:59.038 [INFO][4489] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0" Namespace="kube-system" Pod="coredns-66bc5c9577-8xwlw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8xwlw-eth0" Nov 6 00:42:59.085226 containerd[1685]: 2025-11-06 00:42:59.038 [INFO][4489] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2101674283c ContainerID="80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0" Namespace="kube-system" 
Pod="coredns-66bc5c9577-8xwlw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8xwlw-eth0" Nov 6 00:42:59.085226 containerd[1685]: 2025-11-06 00:42:59.047 [INFO][4489] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0" Namespace="kube-system" Pod="coredns-66bc5c9577-8xwlw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8xwlw-eth0" Nov 6 00:42:59.085226 containerd[1685]: 2025-11-06 00:42:59.051 [INFO][4489] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0" Namespace="kube-system" Pod="coredns-66bc5c9577-8xwlw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8xwlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--8xwlw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1b69788f-7d65-4052-bed4-5296f39e04d2", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0", Pod:"coredns-66bc5c9577-8xwlw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2101674283c", MAC:"ee:e0:1e:b0:b2:eb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:42:59.085226 containerd[1685]: 2025-11-06 00:42:59.081 [INFO][4489] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0" Namespace="kube-system" Pod="coredns-66bc5c9577-8xwlw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8xwlw-eth0" Nov 6 00:42:59.128546 containerd[1685]: time="2025-11-06T00:42:59.128511074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4c854798-5ckfk,Uid:c0e494e7-cf65-452e-8423-9173b08e8c13,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"aa2782526cc5631a3faa9e4ff1eaad6470225b15b7033f866602df994cbc91b9\"" Nov 6 00:42:59.142746 systemd-networkd[1585]: cali5c69775872d: Link UP Nov 6 00:42:59.143452 systemd-networkd[1585]: cali5c69775872d: Gained carrier Nov 6 00:42:59.154798 containerd[1685]: time="2025-11-06T00:42:59.154768969Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-ct65m,Uid:d1996df9-2b05-483b-a46f-e6437e23c06c,Namespace:calico-system,Attempt:0,} returns sandbox id \"2bd3cff76df85c11eeb2ce059b4d03ee16150ef275588e152e55e9c90b907116\"" Nov 6 00:42:59.164921 containerd[1685]: 2025-11-06 00:42:58.558 [INFO][4504] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b6568c646--xhvbr-eth0 calico-apiserver-b6568c646- calico-apiserver d2f87c6a-006e-4994-b5d3-96321f7afa74 851 0 2025-11-06 00:42:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b6568c646 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b6568c646-xhvbr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5c69775872d [] [] }} ContainerID="9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392" Namespace="calico-apiserver" Pod="calico-apiserver-b6568c646-xhvbr" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6568c646--xhvbr-" Nov 6 00:42:59.164921 containerd[1685]: 2025-11-06 00:42:58.561 [INFO][4504] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392" Namespace="calico-apiserver" Pod="calico-apiserver-b6568c646-xhvbr" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6568c646--xhvbr-eth0" Nov 6 00:42:59.164921 containerd[1685]: 2025-11-06 00:42:58.596 [INFO][4544] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392" HandleID="k8s-pod-network.9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392" Workload="localhost-k8s-calico--apiserver--b6568c646--xhvbr-eth0" Nov 6 00:42:59.164921 containerd[1685]: 2025-11-06 
00:42:58.596 [INFO][4544] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392" HandleID="k8s-pod-network.9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392" Workload="localhost-k8s-calico--apiserver--b6568c646--xhvbr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b6568c646-xhvbr", "timestamp":"2025-11-06 00:42:58.596129551 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:42:59.164921 containerd[1685]: 2025-11-06 00:42:58.596 [INFO][4544] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:42:59.164921 containerd[1685]: 2025-11-06 00:42:59.035 [INFO][4544] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:42:59.164921 containerd[1685]: 2025-11-06 00:42:59.036 [INFO][4544] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:42:59.164921 containerd[1685]: 2025-11-06 00:42:59.047 [INFO][4544] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392" host="localhost" Nov 6 00:42:59.164921 containerd[1685]: 2025-11-06 00:42:59.054 [INFO][4544] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:42:59.164921 containerd[1685]: 2025-11-06 00:42:59.099 [INFO][4544] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:42:59.164921 containerd[1685]: 2025-11-06 00:42:59.102 [INFO][4544] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:42:59.164921 containerd[1685]: 2025-11-06 00:42:59.105 [INFO][4544] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:42:59.164921 containerd[1685]: 2025-11-06 00:42:59.105 [INFO][4544] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392" host="localhost" Nov 6 00:42:59.164921 containerd[1685]: 2025-11-06 00:42:59.110 [INFO][4544] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392 Nov 6 00:42:59.164921 containerd[1685]: 2025-11-06 00:42:59.121 [INFO][4544] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392" host="localhost" Nov 6 00:42:59.164921 containerd[1685]: 2025-11-06 00:42:59.137 [INFO][4544] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392" host="localhost" Nov 6 00:42:59.164921 containerd[1685]: 2025-11-06 00:42:59.137 [INFO][4544] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392" host="localhost" Nov 6 00:42:59.164921 containerd[1685]: 2025-11-06 00:42:59.137 [INFO][4544] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:42:59.164921 containerd[1685]: 2025-11-06 00:42:59.137 [INFO][4544] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392" HandleID="k8s-pod-network.9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392" Workload="localhost-k8s-calico--apiserver--b6568c646--xhvbr-eth0" Nov 6 00:42:59.169426 containerd[1685]: 2025-11-06 00:42:59.139 [INFO][4504] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392" Namespace="calico-apiserver" Pod="calico-apiserver-b6568c646-xhvbr" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6568c646--xhvbr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b6568c646--xhvbr-eth0", GenerateName:"calico-apiserver-b6568c646-", Namespace:"calico-apiserver", SelfLink:"", UID:"d2f87c6a-006e-4994-b5d3-96321f7afa74", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b6568c646", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b6568c646-xhvbr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c69775872d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:42:59.169426 containerd[1685]: 2025-11-06 00:42:59.139 [INFO][4504] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392" Namespace="calico-apiserver" Pod="calico-apiserver-b6568c646-xhvbr" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6568c646--xhvbr-eth0" Nov 6 00:42:59.169426 containerd[1685]: 2025-11-06 00:42:59.139 [INFO][4504] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c69775872d ContainerID="9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392" Namespace="calico-apiserver" Pod="calico-apiserver-b6568c646-xhvbr" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6568c646--xhvbr-eth0" Nov 6 00:42:59.169426 containerd[1685]: 2025-11-06 00:42:59.144 [INFO][4504] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392" Namespace="calico-apiserver" Pod="calico-apiserver-b6568c646-xhvbr" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6568c646--xhvbr-eth0" Nov 6 00:42:59.169426 containerd[1685]: 2025-11-06 00:42:59.145 [INFO][4504] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392" Namespace="calico-apiserver" Pod="calico-apiserver-b6568c646-xhvbr" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6568c646--xhvbr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b6568c646--xhvbr-eth0", GenerateName:"calico-apiserver-b6568c646-", Namespace:"calico-apiserver", SelfLink:"", UID:"d2f87c6a-006e-4994-b5d3-96321f7afa74", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b6568c646", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392", Pod:"calico-apiserver-b6568c646-xhvbr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c69775872d", MAC:"32:3d:76:7d:ca:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:42:59.169426 containerd[1685]: 2025-11-06 00:42:59.161 [INFO][4504] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392" Namespace="calico-apiserver" Pod="calico-apiserver-b6568c646-xhvbr" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6568c646--xhvbr-eth0" Nov 6 00:42:59.200194 containerd[1685]: time="2025-11-06T00:42:59.200090179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:42:59.265752 containerd[1685]: time="2025-11-06T00:42:59.265654807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b655bc9b9-hp8mt,Uid:68d221fc-9d1f-4317-856d-8104af586bb8,Namespace:calico-system,Attempt:0,} returns sandbox id \"b8eab80e95c46007161dbd1dbc3c49dfac40499f00f178d56bc82730ead8181c\"" Nov 6 00:42:59.338050 systemd-networkd[1585]: calib8f84cd0ab4: Gained IPv6LL Nov 6 00:42:59.362253 containerd[1685]: time="2025-11-06T00:42:59.362203246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4c854798-qdwsh,Uid:f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0cb8d7c66291be6d86965c0e74e037a423322d45edf94bb9e447d17fbab8176c\"" Nov 6 00:42:59.438260 containerd[1685]: time="2025-11-06T00:42:59.438214725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-767d5fc77b-l2scd,Uid:459c9ede-8e15-4c74-8abd-6b9e688b2183,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b4f37d67822f971f7a4d80b953b94cf5a186ae83a5dd75f6195f5666200565f\"" Nov 6 00:42:59.798390 containerd[1685]: time="2025-11-06T00:42:59.798333659Z" level=info msg="connecting to shim cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31" address="unix:///run/containerd/s/63cdb44985881e678d5461c9282e24554d0ce2e085f6b93683cf8ec67d914415" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:42:59.818988 systemd[1]: Started cri-containerd-cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31.scope - libcontainer container cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31. 
Nov 6 00:42:59.830881 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:42:59.900908 containerd[1685]: time="2025-11-06T00:42:59.900789037Z" level=info msg="connecting to shim 4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668" address="unix:///run/containerd/s/358c10ed31e8d3dd9531c78d4d2545235b12f217485234873fc93b7a2df72d1a" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:42:59.913214 systemd-networkd[1585]: cali484973e90e0: Gained IPv6LL Nov 6 00:42:59.913963 systemd[1]: Started cri-containerd-4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668.scope - libcontainer container 4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668. Nov 6 00:42:59.922291 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:42:59.925317 containerd[1685]: time="2025-11-06T00:42:59.925275410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tx6df,Uid:64e4744f-a9ed-4557-9e74-304e879412c8,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc543b649807be9ec264f06ea6d072bda725cdaf0d312ee6d3d14c6bb1c84f31\"" Nov 6 00:42:59.932007 containerd[1685]: time="2025-11-06T00:42:59.931976630Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:42:59.945677 containerd[1685]: time="2025-11-06T00:42:59.945636577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:42:59.945677 containerd[1685]: time="2025-11-06T00:42:59.945668345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:42:59.956499 
containerd[1685]: time="2025-11-06T00:42:59.956346396Z" level=info msg="connecting to shim 80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0" address="unix:///run/containerd/s/08cf427df5c395ff06bde42d7152dace4a7324fc9b4b9e483bdd52671877bcb0" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:42:59.959360 containerd[1685]: time="2025-11-06T00:42:59.959336679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-84pcz,Uid:f1a660b5-fc2e-4025-a112-e4a7be1ec985,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668\"" Nov 6 00:42:59.964045 containerd[1685]: time="2025-11-06T00:42:59.964010684Z" level=info msg="connecting to shim 9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392" address="unix:///run/containerd/s/5168c9f9d42783d49c3163a59a47b226b4104bd922f71af6b56560451c2269cf" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:42:59.980513 kubelet[2990]: E1106 00:42:59.980372 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:42:59.981528 containerd[1685]: time="2025-11-06T00:42:59.981302069Z" level=info msg="CreateContainer within sandbox \"4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:42:59.982112 kubelet[2990]: E1106 00:42:59.982085 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:42:59.988914 containerd[1685]: time="2025-11-06T00:42:59.988142562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:42:59.990953 kubelet[2990]: E1106 00:42:59.990771 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5b4c854798-5ckfk_calico-apiserver(c0e494e7-cf65-452e-8423-9173b08e8c13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:42:59.993575 kubelet[2990]: E1106 00:42:59.993449 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4c854798-5ckfk" podUID="c0e494e7-cf65-452e-8423-9173b08e8c13" Nov 6 00:42:59.994037 systemd[1]: Started cri-containerd-9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392.scope - libcontainer container 9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392. Nov 6 00:43:00.007051 systemd[1]: Started cri-containerd-80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0.scope - libcontainer container 80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0. 
Nov 6 00:43:00.016979 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:43:00.024113 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:43:00.060900 containerd[1685]: time="2025-11-06T00:43:00.060756505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6568c646-xhvbr,Uid:d2f87c6a-006e-4994-b5d3-96321f7afa74,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9595523713db8a944f17b063f466374963743a546abcc598c8b269d8e647b392\"" Nov 6 00:43:00.064322 containerd[1685]: time="2025-11-06T00:43:00.064247465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8xwlw,Uid:1b69788f-7d65-4052-bed4-5296f39e04d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0\"" Nov 6 00:43:00.083618 containerd[1685]: time="2025-11-06T00:43:00.083462091Z" level=info msg="Container 2b518b961555717486453e386e0ded740c0678df95a20eac63ff76b5df041739: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:43:00.123994 containerd[1685]: time="2025-11-06T00:43:00.123967559Z" level=info msg="CreateContainer within sandbox \"4cd57a4bad182a4e2df2f491b9c71fbd100efbfc931fdd1b272fe8ab2781e668\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b518b961555717486453e386e0ded740c0678df95a20eac63ff76b5df041739\"" Nov 6 00:43:00.170015 containerd[1685]: time="2025-11-06T00:43:00.169945971Z" level=info msg="StartContainer for \"2b518b961555717486453e386e0ded740c0678df95a20eac63ff76b5df041739\"" Nov 6 00:43:00.170185 systemd-networkd[1585]: cali73d5d0cc5a0: Gained IPv6LL Nov 6 00:43:00.175356 containerd[1685]: time="2025-11-06T00:43:00.175134569Z" level=info msg="connecting to shim 2b518b961555717486453e386e0ded740c0678df95a20eac63ff76b5df041739" 
address="unix:///run/containerd/s/358c10ed31e8d3dd9531c78d4d2545235b12f217485234873fc93b7a2df72d1a" protocol=ttrpc version=3 Nov 6 00:43:00.182815 containerd[1685]: time="2025-11-06T00:43:00.181574127Z" level=info msg="CreateContainer within sandbox \"80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:43:00.193160 systemd[1]: Started cri-containerd-2b518b961555717486453e386e0ded740c0678df95a20eac63ff76b5df041739.scope - libcontainer container 2b518b961555717486453e386e0ded740c0678df95a20eac63ff76b5df041739. Nov 6 00:43:00.201952 kubelet[2990]: E1106 00:43:00.201714 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4c854798-5ckfk" podUID="c0e494e7-cf65-452e-8423-9173b08e8c13" Nov 6 00:43:00.207209 containerd[1685]: time="2025-11-06T00:43:00.207178282Z" level=info msg="Container 7a06b5c73a14fdcf8e3674583359db3885678904018c0c6837007a54ebf47311: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:43:00.233028 systemd-networkd[1585]: cali505653cc17e: Gained IPv6LL Nov 6 00:43:00.274059 containerd[1685]: time="2025-11-06T00:43:00.274024041Z" level=info msg="StartContainer for \"2b518b961555717486453e386e0ded740c0678df95a20eac63ff76b5df041739\" returns successfully" Nov 6 00:43:00.335804 containerd[1685]: time="2025-11-06T00:43:00.335730751Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:43:00.550994 containerd[1685]: time="2025-11-06T00:43:00.550638168Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:43:00.551090 containerd[1685]: time="2025-11-06T00:43:00.551066805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:43:00.551332 kubelet[2990]: E1106 00:43:00.551237 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:43:00.551377 kubelet[2990]: E1106 00:43:00.551336 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:43:00.551577 kubelet[2990]: E1106 00:43:00.551548 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ct65m_calico-system(d1996df9-2b05-483b-a46f-e6437e23c06c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:43:00.551779 containerd[1685]: time="2025-11-06T00:43:00.551762770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:43:00.554411 containerd[1685]: time="2025-11-06T00:43:00.554206655Z" level=info msg="CreateContainer within sandbox 
\"80999a82ca8479d7d6de81499256b740a509b6e5cc87f5e7f5946c8628c9adf0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7a06b5c73a14fdcf8e3674583359db3885678904018c0c6837007a54ebf47311\"" Nov 6 00:43:00.554773 containerd[1685]: time="2025-11-06T00:43:00.554749173Z" level=info msg="StartContainer for \"7a06b5c73a14fdcf8e3674583359db3885678904018c0c6837007a54ebf47311\"" Nov 6 00:43:00.555607 containerd[1685]: time="2025-11-06T00:43:00.555583557Z" level=info msg="connecting to shim 7a06b5c73a14fdcf8e3674583359db3885678904018c0c6837007a54ebf47311" address="unix:///run/containerd/s/08cf427df5c395ff06bde42d7152dace4a7324fc9b4b9e483bdd52671877bcb0" protocol=ttrpc version=3 Nov 6 00:43:00.579020 systemd[1]: Started cri-containerd-7a06b5c73a14fdcf8e3674583359db3885678904018c0c6837007a54ebf47311.scope - libcontainer container 7a06b5c73a14fdcf8e3674583359db3885678904018c0c6837007a54ebf47311. Nov 6 00:43:00.609204 containerd[1685]: time="2025-11-06T00:43:00.609150069Z" level=info msg="StartContainer for \"7a06b5c73a14fdcf8e3674583359db3885678904018c0c6837007a54ebf47311\" returns successfully" Nov 6 00:43:00.616957 systemd-networkd[1585]: cali7a54e9adb22: Gained IPv6LL Nov 6 00:43:00.680956 systemd-networkd[1585]: calidcda1a0d366: Gained IPv6LL Nov 6 00:43:00.745093 systemd-networkd[1585]: cali5c69775872d: Gained IPv6LL Nov 6 00:43:00.870016 containerd[1685]: time="2025-11-06T00:43:00.869854157Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:43:00.872972 systemd-networkd[1585]: calia2ae2f52b4a: Gained IPv6LL Nov 6 00:43:00.879703 containerd[1685]: time="2025-11-06T00:43:00.879591548Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found" Nov 6 00:43:00.879703 containerd[1685]: time="2025-11-06T00:43:00.879677738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:43:00.880058 kubelet[2990]: E1106 00:43:00.880002 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:43:00.880133 kubelet[2990]: E1106 00:43:00.880061 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:43:00.880487 kubelet[2990]: E1106 00:43:00.880214 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5b655bc9b9-hp8mt_calico-system(68d221fc-9d1f-4317-856d-8104af586bb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:43:00.880487 kubelet[2990]: E1106 00:43:00.880254 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b655bc9b9-hp8mt" podUID="68d221fc-9d1f-4317-856d-8104af586bb8" Nov 6 00:43:00.880836 containerd[1685]: time="2025-11-06T00:43:00.880267718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:43:01.001014 systemd-networkd[1585]: cali2101674283c: Gained IPv6LL Nov 6 00:43:01.209156 kubelet[2990]: E1106 00:43:01.209060 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4c854798-5ckfk" podUID="c0e494e7-cf65-452e-8423-9173b08e8c13" Nov 6 00:43:01.226554 kubelet[2990]: E1106 00:43:01.226522 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b655bc9b9-hp8mt" podUID="68d221fc-9d1f-4317-856d-8104af586bb8" Nov 6 00:43:01.260809 containerd[1685]: time="2025-11-06T00:43:01.260782984Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:43:01.278782 containerd[1685]: time="2025-11-06T00:43:01.278734839Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:43:01.279032 containerd[1685]: time="2025-11-06T00:43:01.278777866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:43:01.279060 kubelet[2990]: E1106 00:43:01.278921 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:43:01.279060 kubelet[2990]: E1106 00:43:01.278951 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:43:01.279103 kubelet[2990]: E1106 00:43:01.279060 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5b4c854798-qdwsh_calico-apiserver(f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:43:01.279103 kubelet[2990]: E1106 00:43:01.279083 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4c854798-qdwsh" podUID="f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e" Nov 6 00:43:01.294321 containerd[1685]: time="2025-11-06T00:43:01.279223105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:43:01.429202 kubelet[2990]: I1106 00:43:01.429141 2990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8xwlw" podStartSLOduration=43.377684635 podStartE2EDuration="43.377684635s" podCreationTimestamp="2025-11-06 00:42:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:43:01.247414459 +0000 UTC m=+50.041088148" watchObservedRunningTime="2025-11-06 00:43:01.377684635 +0000 UTC m=+50.171358318" Nov 6 00:43:01.441881 kubelet[2990]: I1106 00:43:01.441642 2990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-84pcz" podStartSLOduration=43.441629988 podStartE2EDuration="43.441629988s" podCreationTimestamp="2025-11-06 00:42:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:43:01.42939684 +0000 UTC m=+50.223070517" watchObservedRunningTime="2025-11-06 00:43:01.441629988 +0000 UTC m=+50.235303672" Nov 6 00:43:01.676239 containerd[1685]: time="2025-11-06T00:43:01.676190466Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:43:01.689549 containerd[1685]: time="2025-11-06T00:43:01.689502345Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:43:01.689646 containerd[1685]: time="2025-11-06T00:43:01.689570470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:43:01.689796 kubelet[2990]: E1106 00:43:01.689713 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:43:01.689796 kubelet[2990]: E1106 00:43:01.689760 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:43:01.689994 kubelet[2990]: E1106 00:43:01.689896 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-767d5fc77b-l2scd_calico-system(459c9ede-8e15-4c74-8abd-6b9e688b2183): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:43:01.691212 containerd[1685]: time="2025-11-06T00:43:01.691003257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:43:02.071458 containerd[1685]: time="2025-11-06T00:43:02.071365348Z" level=info msg="fetch failed after status: 404 
Not Found" host=ghcr.io Nov 6 00:43:02.075979 containerd[1685]: time="2025-11-06T00:43:02.075943948Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:43:02.076062 containerd[1685]: time="2025-11-06T00:43:02.076029711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:43:02.076353 kubelet[2990]: E1106 00:43:02.076200 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:43:02.076353 kubelet[2990]: E1106 00:43:02.076239 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:43:02.076531 kubelet[2990]: E1106 00:43:02.076511 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tx6df_calico-system(64e4744f-a9ed-4557-9e74-304e879412c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:43:02.076569 kubelet[2990]: E1106 00:43:02.076543 2990 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tx6df" podUID="64e4744f-a9ed-4557-9e74-304e879412c8" Nov 6 00:43:02.077589 containerd[1685]: time="2025-11-06T00:43:02.077379762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:43:02.211599 kubelet[2990]: E1106 00:43:02.211572 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4c854798-qdwsh" podUID="f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e" Nov 6 00:43:02.218491 kubelet[2990]: E1106 00:43:02.212115 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tx6df" podUID="64e4744f-a9ed-4557-9e74-304e879412c8" Nov 6 00:43:02.446717 containerd[1685]: time="2025-11-06T00:43:02.446548413Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Nov 6 00:43:02.450331 containerd[1685]: time="2025-11-06T00:43:02.450303739Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:43:02.450441 containerd[1685]: time="2025-11-06T00:43:02.450431228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:43:02.450896 kubelet[2990]: E1106 00:43:02.450580 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:43:02.450896 kubelet[2990]: E1106 00:43:02.450619 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:43:02.450896 kubelet[2990]: E1106 00:43:02.450729 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-b6568c646-xhvbr_calico-apiserver(d2f87c6a-006e-4994-b5d3-96321f7afa74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:43:02.450896 kubelet[2990]: E1106 
00:43:02.450752 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b6568c646-xhvbr" podUID="d2f87c6a-006e-4994-b5d3-96321f7afa74" Nov 6 00:43:02.457530 containerd[1685]: time="2025-11-06T00:43:02.451244530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:43:02.786626 containerd[1685]: time="2025-11-06T00:43:02.786473818Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:43:02.786925 containerd[1685]: time="2025-11-06T00:43:02.786889383Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:43:02.787008 containerd[1685]: time="2025-11-06T00:43:02.786966842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:43:02.787165 kubelet[2990]: E1106 00:43:02.787136 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:43:02.787239 kubelet[2990]: E1106 00:43:02.787219 2990 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:43:02.788212 kubelet[2990]: E1106 00:43:02.787355 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-ct65m_calico-system(d1996df9-2b05-483b-a46f-e6437e23c06c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:43:02.788212 kubelet[2990]: E1106 00:43:02.787381 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ct65m" podUID="d1996df9-2b05-483b-a46f-e6437e23c06c" Nov 6 00:43:02.788697 containerd[1685]: time="2025-11-06T00:43:02.788531132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 
00:43:03.111046 containerd[1685]: time="2025-11-06T00:43:03.110954007Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:43:03.119461 containerd[1685]: time="2025-11-06T00:43:03.119418816Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:43:03.119616 containerd[1685]: time="2025-11-06T00:43:03.119488418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:43:03.119756 kubelet[2990]: E1106 00:43:03.119735 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:43:03.119872 kubelet[2990]: E1106 00:43:03.119813 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:43:03.126961 kubelet[2990]: E1106 00:43:03.119968 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-767d5fc77b-l2scd_calico-system(459c9ede-8e15-4c74-8abd-6b9e688b2183): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:43:03.126961 kubelet[2990]: E1106 00:43:03.120003 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767d5fc77b-l2scd" podUID="459c9ede-8e15-4c74-8abd-6b9e688b2183" Nov 6 00:43:03.226879 kubelet[2990]: E1106 00:43:03.225856 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-767d5fc77b-l2scd" podUID="459c9ede-8e15-4c74-8abd-6b9e688b2183" Nov 6 00:43:03.227291 kubelet[2990]: E1106 00:43:03.227273 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ct65m" podUID="d1996df9-2b05-483b-a46f-e6437e23c06c" Nov 6 00:43:03.227424 kubelet[2990]: E1106 00:43:03.227412 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b6568c646-xhvbr" podUID="d2f87c6a-006e-4994-b5d3-96321f7afa74" Nov 6 00:43:13.290203 containerd[1685]: time="2025-11-06T00:43:13.289941733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:43:13.668257 containerd[1685]: 
time="2025-11-06T00:43:13.668158525Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:43:13.668561 containerd[1685]: time="2025-11-06T00:43:13.668538976Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:43:13.668680 containerd[1685]: time="2025-11-06T00:43:13.668612935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:43:13.668740 kubelet[2990]: E1106 00:43:13.668712 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:43:13.668990 kubelet[2990]: E1106 00:43:13.668747 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:43:13.668990 kubelet[2990]: E1106 00:43:13.668800 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5b655bc9b9-hp8mt_calico-system(68d221fc-9d1f-4317-856d-8104af586bb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:43:13.668990 kubelet[2990]: E1106 00:43:13.668826 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b655bc9b9-hp8mt" podUID="68d221fc-9d1f-4317-856d-8104af586bb8" Nov 6 00:43:14.289565 containerd[1685]: time="2025-11-06T00:43:14.289391471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:43:14.627129 containerd[1685]: time="2025-11-06T00:43:14.627034085Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:43:14.632463 containerd[1685]: time="2025-11-06T00:43:14.632330560Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:43:14.632463 containerd[1685]: time="2025-11-06T00:43:14.632394799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:43:14.632677 kubelet[2990]: E1106 00:43:14.632632 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:43:14.632715 kubelet[2990]: E1106 00:43:14.632676 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:43:14.632752 kubelet[2990]: E1106 00:43:14.632733 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-767d5fc77b-l2scd_calico-system(459c9ede-8e15-4c74-8abd-6b9e688b2183): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:43:14.633994 containerd[1685]: time="2025-11-06T00:43:14.633762701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:43:15.000234 containerd[1685]: time="2025-11-06T00:43:15.000166650Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:43:15.000688 containerd[1685]: time="2025-11-06T00:43:15.000652815Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:43:15.000744 containerd[1685]: time="2025-11-06T00:43:15.000720757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:43:15.001226 kubelet[2990]: E1106 00:43:15.001047 2990 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:43:15.001226 kubelet[2990]: E1106 00:43:15.001090 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:43:15.001226 kubelet[2990]: E1106 00:43:15.001153 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-767d5fc77b-l2scd_calico-system(459c9ede-8e15-4c74-8abd-6b9e688b2183): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:43:15.001887 kubelet[2990]: E1106 00:43:15.001189 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767d5fc77b-l2scd" podUID="459c9ede-8e15-4c74-8abd-6b9e688b2183" Nov 6 00:43:15.289495 containerd[1685]: time="2025-11-06T00:43:15.288991485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:43:15.632929 containerd[1685]: time="2025-11-06T00:43:15.631977365Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:43:15.633502 containerd[1685]: time="2025-11-06T00:43:15.633428088Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:43:15.633502 containerd[1685]: time="2025-11-06T00:43:15.633484843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:43:15.634988 kubelet[2990]: E1106 00:43:15.634965 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:43:15.635151 kubelet[2990]: E1106 00:43:15.635049 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:43:15.635189 kubelet[2990]: E1106 00:43:15.635150 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod 
csi-node-driver-ct65m_calico-system(d1996df9-2b05-483b-a46f-e6437e23c06c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:43:15.635486 containerd[1685]: time="2025-11-06T00:43:15.635475280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:43:16.013293 containerd[1685]: time="2025-11-06T00:43:16.013260811Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:43:16.013568 containerd[1685]: time="2025-11-06T00:43:16.013547623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:43:16.013606 containerd[1685]: time="2025-11-06T00:43:16.013555562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:43:16.013705 kubelet[2990]: E1106 00:43:16.013679 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:43:16.013907 kubelet[2990]: E1106 00:43:16.013710 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:43:16.013907 kubelet[2990]: E1106 00:43:16.013825 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5b4c854798-5ckfk_calico-apiserver(c0e494e7-cf65-452e-8423-9173b08e8c13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:43:16.013907 kubelet[2990]: E1106 00:43:16.013846 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4c854798-5ckfk" podUID="c0e494e7-cf65-452e-8423-9173b08e8c13" Nov 6 00:43:16.014284 containerd[1685]: time="2025-11-06T00:43:16.014247072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:43:16.373440 containerd[1685]: time="2025-11-06T00:43:16.372966120Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:43:16.374470 containerd[1685]: time="2025-11-06T00:43:16.373605706Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:43:16.374470 containerd[1685]: time="2025-11-06T00:43:16.373642583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active 
requests=0, bytes read=77" Nov 6 00:43:16.374470 containerd[1685]: time="2025-11-06T00:43:16.374211638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:43:16.376483 kubelet[2990]: E1106 00:43:16.373729 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:43:16.376483 kubelet[2990]: E1106 00:43:16.373772 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:43:16.376483 kubelet[2990]: E1106 00:43:16.373925 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-b6568c646-xhvbr_calico-apiserver(d2f87c6a-006e-4994-b5d3-96321f7afa74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:43:16.376483 kubelet[2990]: E1106 00:43:16.373947 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-b6568c646-xhvbr" podUID="d2f87c6a-006e-4994-b5d3-96321f7afa74" Nov 6 00:43:16.718114 containerd[1685]: time="2025-11-06T00:43:16.718076411Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:43:16.718913 containerd[1685]: time="2025-11-06T00:43:16.718375599Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:43:16.718913 containerd[1685]: time="2025-11-06T00:43:16.718436494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:43:16.718999 kubelet[2990]: E1106 00:43:16.718526 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:43:16.718999 kubelet[2990]: E1106 00:43:16.718560 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:43:16.718999 kubelet[2990]: E1106 00:43:16.718616 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod 
csi-node-driver-ct65m_calico-system(d1996df9-2b05-483b-a46f-e6437e23c06c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:43:16.719116 kubelet[2990]: E1106 00:43:16.718648 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ct65m" podUID="d1996df9-2b05-483b-a46f-e6437e23c06c" Nov 6 00:43:17.289974 containerd[1685]: time="2025-11-06T00:43:17.289546838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:43:17.698046 containerd[1685]: time="2025-11-06T00:43:17.697961211Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:43:17.700993 containerd[1685]: time="2025-11-06T00:43:17.700915905Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:43:17.700993 
containerd[1685]: time="2025-11-06T00:43:17.700967648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:43:17.701141 kubelet[2990]: E1106 00:43:17.701113 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:43:17.708206 kubelet[2990]: E1106 00:43:17.701144 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:43:17.708206 kubelet[2990]: E1106 00:43:17.701295 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tx6df_calico-system(64e4744f-a9ed-4557-9e74-304e879412c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:43:17.708206 kubelet[2990]: E1106 00:43:17.701410 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tx6df" 
podUID="64e4744f-a9ed-4557-9e74-304e879412c8" Nov 6 00:43:17.708444 containerd[1685]: time="2025-11-06T00:43:17.701513810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:43:18.069602 containerd[1685]: time="2025-11-06T00:43:18.069554984Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:43:18.069932 containerd[1685]: time="2025-11-06T00:43:18.069914335Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:43:18.070042 containerd[1685]: time="2025-11-06T00:43:18.069973424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:43:18.070180 kubelet[2990]: E1106 00:43:18.070063 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:43:18.070180 kubelet[2990]: E1106 00:43:18.070101 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:43:18.070180 kubelet[2990]: E1106 00:43:18.070151 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-5b4c854798-qdwsh_calico-apiserver(f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:43:18.070252 kubelet[2990]: E1106 00:43:18.070171 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4c854798-qdwsh" podUID="f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e" Nov 6 00:43:26.573645 containerd[1685]: time="2025-11-06T00:43:26.573607568Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11d5b65d22425718dbef4a1fa49c80b01f4beabf387719ff9e32b2595660cfaa\" id:\"3139eb4e1de115acd3fcbf543a61d8dd5a51fe037628e1bee2e6b4ba22cc7c01\" pid:5128 exited_at:{seconds:1762389806 nanos:567907563}" Nov 6 00:43:27.288758 kubelet[2990]: E1106 00:43:27.288632 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b655bc9b9-hp8mt" podUID="68d221fc-9d1f-4317-856d-8104af586bb8" Nov 6 00:43:27.289666 kubelet[2990]: E1106 00:43:27.288970 2990 
pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767d5fc77b-l2scd" podUID="459c9ede-8e15-4c74-8abd-6b9e688b2183" Nov 6 00:43:28.288894 kubelet[2990]: E1106 00:43:28.288845 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tx6df" podUID="64e4744f-a9ed-4557-9e74-304e879412c8" Nov 6 00:43:29.290353 kubelet[2990]: E1106 00:43:29.290325 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4c854798-5ckfk" podUID="c0e494e7-cf65-452e-8423-9173b08e8c13" Nov 6 00:43:30.289850 kubelet[2990]: E1106 00:43:30.289768 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4c854798-qdwsh" podUID="f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e" Nov 6 00:43:30.290458 kubelet[2990]: E1106 00:43:30.290427 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ct65m" podUID="d1996df9-2b05-483b-a46f-e6437e23c06c" Nov 6 
00:43:31.288936 kubelet[2990]: E1106 00:43:31.288839 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b6568c646-xhvbr" podUID="d2f87c6a-006e-4994-b5d3-96321f7afa74" Nov 6 00:43:37.368334 systemd[1]: Started sshd@7-139.178.70.101:22-139.178.89.65:53092.service - OpenSSH per-connection server daemon (139.178.89.65:53092). Nov 6 00:43:37.637362 sshd[5153]: Accepted publickey for core from 139.178.89.65 port 53092 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc Nov 6 00:43:37.640783 sshd-session[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:43:37.656080 systemd-logind[1659]: New session 10 of user core. Nov 6 00:43:37.660849 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 6 00:43:38.579757 sshd[5160]: Connection closed by 139.178.89.65 port 53092 Nov 6 00:43:38.579338 sshd-session[5153]: pam_unix(sshd:session): session closed for user core Nov 6 00:43:38.589440 systemd[1]: sshd@7-139.178.70.101:22-139.178.89.65:53092.service: Deactivated successfully. Nov 6 00:43:38.591771 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 00:43:38.594090 systemd-logind[1659]: Session 10 logged out. Waiting for processes to exit. Nov 6 00:43:38.595563 systemd-logind[1659]: Removed session 10. 
Nov 6 00:43:40.288239 containerd[1685]: time="2025-11-06T00:43:40.288159212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:43:40.625959 containerd[1685]: time="2025-11-06T00:43:40.625750140Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:43:40.634060 containerd[1685]: time="2025-11-06T00:43:40.634030525Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:43:40.634153 containerd[1685]: time="2025-11-06T00:43:40.634085029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:43:40.634227 kubelet[2990]: E1106 00:43:40.634201 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:43:40.634484 kubelet[2990]: E1106 00:43:40.634234 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:43:40.634484 kubelet[2990]: E1106 00:43:40.634303 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-5b655bc9b9-hp8mt_calico-system(68d221fc-9d1f-4317-856d-8104af586bb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:43:40.634484 kubelet[2990]: E1106 00:43:40.634325 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b655bc9b9-hp8mt" podUID="68d221fc-9d1f-4317-856d-8104af586bb8" Nov 6 00:43:41.292250 containerd[1685]: time="2025-11-06T00:43:41.291540186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:43:41.648542 containerd[1685]: time="2025-11-06T00:43:41.648452885Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:43:41.661087 containerd[1685]: time="2025-11-06T00:43:41.660931386Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:43:41.661087 containerd[1685]: time="2025-11-06T00:43:41.661031117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:43:41.661249 kubelet[2990]: E1106 00:43:41.661152 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:43:41.661249 kubelet[2990]: E1106 00:43:41.661191 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:43:41.662275 kubelet[2990]: E1106 00:43:41.661342 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5b4c854798-5ckfk_calico-apiserver(c0e494e7-cf65-452e-8423-9173b08e8c13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:43:41.662275 kubelet[2990]: E1106 00:43:41.661376 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4c854798-5ckfk" podUID="c0e494e7-cf65-452e-8423-9173b08e8c13" Nov 6 00:43:41.662370 containerd[1685]: time="2025-11-06T00:43:41.661770896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:43:41.985103 containerd[1685]: time="2025-11-06T00:43:41.984954581Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:43:41.990067 containerd[1685]: time="2025-11-06T00:43:41.990047441Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 6 00:43:41.990148 containerd[1685]: time="2025-11-06T00:43:41.990104596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 6 00:43:41.990384 kubelet[2990]: E1106 00:43:41.990289 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 6 00:43:41.990384 kubelet[2990]: E1106 00:43:41.990338 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 6 00:43:41.990484 kubelet[2990]: E1106 00:43:41.990466 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5b4c854798-qdwsh_calico-apiserver(f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:43:41.990513 kubelet[2990]: E1106 00:43:41.990495 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4c854798-qdwsh" podUID="f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e"
Nov 6 00:43:41.991346 containerd[1685]: time="2025-11-06T00:43:41.991329412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 6 00:43:42.337000 containerd[1685]: time="2025-11-06T00:43:42.336895494Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:43:42.337640 containerd[1685]: time="2025-11-06T00:43:42.337370493Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 6 00:43:42.337640 containerd[1685]: time="2025-11-06T00:43:42.337416121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 6 00:43:42.337790 kubelet[2990]: E1106 00:43:42.337541 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 6 00:43:42.337790 kubelet[2990]: E1106 00:43:42.337574 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 6 00:43:42.337790 kubelet[2990]: E1106 00:43:42.337625 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-767d5fc77b-l2scd_calico-system(459c9ede-8e15-4c74-8abd-6b9e688b2183): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:43:42.339014 containerd[1685]: time="2025-11-06T00:43:42.338909145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 6 00:43:42.659440 containerd[1685]: time="2025-11-06T00:43:42.659364186Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:43:42.667687 containerd[1685]: time="2025-11-06T00:43:42.667665790Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 6 00:43:42.667739 containerd[1685]: time="2025-11-06T00:43:42.667716116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 6 00:43:42.667835 kubelet[2990]: E1106 00:43:42.667809 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 6 00:43:42.668057 kubelet[2990]: E1106 00:43:42.667841 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 6 00:43:42.668057 kubelet[2990]: E1106 00:43:42.667910 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-767d5fc77b-l2scd_calico-system(459c9ede-8e15-4c74-8abd-6b9e688b2183): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:43:42.668057 kubelet[2990]: E1106 00:43:42.667937 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767d5fc77b-l2scd" podUID="459c9ede-8e15-4c74-8abd-6b9e688b2183"
Nov 6 00:43:43.288647 containerd[1685]: time="2025-11-06T00:43:43.288484053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 6 00:43:43.591508 systemd[1]: Started sshd@8-139.178.70.101:22-139.178.89.65:53108.service - OpenSSH per-connection server daemon (139.178.89.65:53108).
Nov 6 00:43:43.605382 containerd[1685]: time="2025-11-06T00:43:43.605342668Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:43:43.605638 containerd[1685]: time="2025-11-06T00:43:43.605619894Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 6 00:43:43.605879 containerd[1685]: time="2025-11-06T00:43:43.605663983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 6 00:43:43.605910 kubelet[2990]: E1106 00:43:43.605770 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 6 00:43:43.605910 kubelet[2990]: E1106 00:43:43.605797 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 6 00:43:43.605910 kubelet[2990]: E1106 00:43:43.605851 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tx6df_calico-system(64e4744f-a9ed-4557-9e74-304e879412c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:43:43.606918 kubelet[2990]: E1106 00:43:43.606895 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tx6df" podUID="64e4744f-a9ed-4557-9e74-304e879412c8"
Nov 6 00:43:43.660188 sshd[5178]: Accepted publickey for core from 139.178.89.65 port 53108 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc
Nov 6 00:43:43.661148 sshd-session[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:43:43.666033 systemd-logind[1659]: New session 11 of user core.
Nov 6 00:43:43.672028 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 6 00:43:43.800140 sshd[5181]: Connection closed by 139.178.89.65 port 53108
Nov 6 00:43:43.801983 sshd-session[5178]: pam_unix(sshd:session): session closed for user core
Nov 6 00:43:43.804520 systemd[1]: sshd@8-139.178.70.101:22-139.178.89.65:53108.service: Deactivated successfully.
Nov 6 00:43:43.804893 systemd-logind[1659]: Session 11 logged out. Waiting for processes to exit.
Nov 6 00:43:43.806847 systemd[1]: session-11.scope: Deactivated successfully.
Nov 6 00:43:43.809828 systemd-logind[1659]: Removed session 11.
Nov 6 00:43:44.289992 containerd[1685]: time="2025-11-06T00:43:44.289674708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 6 00:43:44.631717 containerd[1685]: time="2025-11-06T00:43:44.631606508Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:43:44.634252 containerd[1685]: time="2025-11-06T00:43:44.634224864Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 6 00:43:44.634310 containerd[1685]: time="2025-11-06T00:43:44.634278977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 6 00:43:44.634751 kubelet[2990]: E1106 00:43:44.634379 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 6 00:43:44.634751 kubelet[2990]: E1106 00:43:44.634410 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 6 00:43:44.634751 kubelet[2990]: E1106 00:43:44.634457 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ct65m_calico-system(d1996df9-2b05-483b-a46f-e6437e23c06c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:43:44.635607 containerd[1685]: time="2025-11-06T00:43:44.635116257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 6 00:43:44.982795 containerd[1685]: time="2025-11-06T00:43:44.982040669Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:43:44.986860 containerd[1685]: time="2025-11-06T00:43:44.986791466Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 6 00:43:44.986860 containerd[1685]: time="2025-11-06T00:43:44.986843849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 6 00:43:44.987183 kubelet[2990]: E1106 00:43:44.987024 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 6 00:43:44.987183 kubelet[2990]: E1106 00:43:44.987073 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 6 00:43:44.987183 kubelet[2990]: E1106 00:43:44.987124 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-ct65m_calico-system(d1996df9-2b05-483b-a46f-e6437e23c06c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:43:44.987302 kubelet[2990]: E1106 00:43:44.987157 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ct65m" podUID="d1996df9-2b05-483b-a46f-e6437e23c06c"
Nov 6 00:43:46.289010 containerd[1685]: time="2025-11-06T00:43:46.288546020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 6 00:43:46.694837 containerd[1685]: time="2025-11-06T00:43:46.694671714Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 00:43:46.699735 containerd[1685]: time="2025-11-06T00:43:46.699703591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 6 00:43:46.699843 containerd[1685]: time="2025-11-06T00:43:46.699762460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 6 00:43:46.699886 kubelet[2990]: E1106 00:43:46.699841 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 6 00:43:46.700097 kubelet[2990]: E1106 00:43:46.699905 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 6 00:43:46.700097 kubelet[2990]: E1106 00:43:46.699981 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-b6568c646-xhvbr_calico-apiserver(d2f87c6a-006e-4994-b5d3-96321f7afa74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 6 00:43:46.700097 kubelet[2990]: E1106 00:43:46.700003 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b6568c646-xhvbr" podUID="d2f87c6a-006e-4994-b5d3-96321f7afa74"
Nov 6 00:43:48.807810 systemd[1]: Started sshd@9-139.178.70.101:22-139.178.89.65:56840.service - OpenSSH per-connection server daemon (139.178.89.65:56840).
Nov 6 00:43:49.088987 sshd[5196]: Accepted publickey for core from 139.178.89.65 port 56840 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc
Nov 6 00:43:49.090142 sshd-session[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:43:49.093347 systemd-logind[1659]: New session 12 of user core.
Nov 6 00:43:49.098123 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 6 00:43:49.242675 sshd[5199]: Connection closed by 139.178.89.65 port 56840
Nov 6 00:43:49.244056 sshd-session[5196]: pam_unix(sshd:session): session closed for user core
Nov 6 00:43:49.249246 systemd[1]: sshd@9-139.178.70.101:22-139.178.89.65:56840.service: Deactivated successfully.
Nov 6 00:43:49.250462 systemd[1]: session-12.scope: Deactivated successfully.
Nov 6 00:43:49.253375 systemd-logind[1659]: Session 12 logged out. Waiting for processes to exit.
Nov 6 00:43:49.255906 systemd-logind[1659]: Removed session 12.
Nov 6 00:43:49.257609 systemd[1]: Started sshd@10-139.178.70.101:22-139.178.89.65:56848.service - OpenSSH per-connection server daemon (139.178.89.65:56848).
Nov 6 00:43:49.302084 sshd[5212]: Accepted publickey for core from 139.178.89.65 port 56848 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc
Nov 6 00:43:49.304105 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:43:49.307881 systemd-logind[1659]: New session 13 of user core.
Nov 6 00:43:49.313986 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 6 00:43:49.479889 sshd[5215]: Connection closed by 139.178.89.65 port 56848
Nov 6 00:43:49.480447 sshd-session[5212]: pam_unix(sshd:session): session closed for user core
Nov 6 00:43:49.487655 systemd[1]: sshd@10-139.178.70.101:22-139.178.89.65:56848.service: Deactivated successfully.
Nov 6 00:43:49.490293 systemd[1]: session-13.scope: Deactivated successfully.
Nov 6 00:43:49.491903 systemd-logind[1659]: Session 13 logged out. Waiting for processes to exit.
Nov 6 00:43:49.496359 systemd-logind[1659]: Removed session 13.
Nov 6 00:43:49.498269 systemd[1]: Started sshd@11-139.178.70.101:22-139.178.89.65:56856.service - OpenSSH per-connection server daemon (139.178.89.65:56856).
Nov 6 00:43:49.566922 sshd[5225]: Accepted publickey for core from 139.178.89.65 port 56856 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc
Nov 6 00:43:49.568411 sshd-session[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:43:49.574661 systemd-logind[1659]: New session 14 of user core.
Nov 6 00:43:49.578509 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 6 00:43:49.688232 sshd[5228]: Connection closed by 139.178.89.65 port 56856
Nov 6 00:43:49.688721 sshd-session[5225]: pam_unix(sshd:session): session closed for user core
Nov 6 00:43:49.691261 systemd-logind[1659]: Session 14 logged out. Waiting for processes to exit.
Nov 6 00:43:49.691413 systemd[1]: sshd@11-139.178.70.101:22-139.178.89.65:56856.service: Deactivated successfully.
Nov 6 00:43:49.692752 systemd[1]: session-14.scope: Deactivated successfully.
Nov 6 00:43:49.694386 systemd-logind[1659]: Removed session 14.
Nov 6 00:43:52.288940 kubelet[2990]: E1106 00:43:52.288894 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b655bc9b9-hp8mt" podUID="68d221fc-9d1f-4317-856d-8104af586bb8"
Nov 6 00:43:54.697852 systemd[1]: Started sshd@12-139.178.70.101:22-139.178.89.65:56864.service - OpenSSH per-connection server daemon (139.178.89.65:56864).
Nov 6 00:43:54.737102 sshd[5245]: Accepted publickey for core from 139.178.89.65 port 56864 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc
Nov 6 00:43:54.738329 sshd-session[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:43:54.742140 systemd-logind[1659]: New session 15 of user core.
Nov 6 00:43:54.749006 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 6 00:43:54.858665 sshd[5249]: Connection closed by 139.178.89.65 port 56864
Nov 6 00:43:54.858846 sshd-session[5245]: pam_unix(sshd:session): session closed for user core
Nov 6 00:43:54.863040 systemd[1]: sshd@12-139.178.70.101:22-139.178.89.65:56864.service: Deactivated successfully.
Nov 6 00:43:54.865003 systemd[1]: session-15.scope: Deactivated successfully.
Nov 6 00:43:54.867099 systemd-logind[1659]: Session 15 logged out. Waiting for processes to exit.
Nov 6 00:43:54.868561 systemd-logind[1659]: Removed session 15.
Nov 6 00:43:55.289788 kubelet[2990]: E1106 00:43:55.289602 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4c854798-5ckfk" podUID="c0e494e7-cf65-452e-8423-9173b08e8c13"
Nov 6 00:43:56.290206 kubelet[2990]: E1106 00:43:56.290167 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4c854798-qdwsh" podUID="f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e"
Nov 6 00:43:56.291948 kubelet[2990]: E1106 00:43:56.291912 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767d5fc77b-l2scd" podUID="459c9ede-8e15-4c74-8abd-6b9e688b2183"
Nov 6 00:43:56.292174 kubelet[2990]: E1106 00:43:56.292150 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ct65m" podUID="d1996df9-2b05-483b-a46f-e6437e23c06c"
Nov 6 00:43:56.681657 containerd[1685]: time="2025-11-06T00:43:56.681443452Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11d5b65d22425718dbef4a1fa49c80b01f4beabf387719ff9e32b2595660cfaa\" id:\"f1d3673e88b86d662f448cd84d9004f63d8d43f46e7b6cd83eca9e3d86117316\" pid:5272 exit_status:1 exited_at:{seconds:1762389836 nanos:645992579}"
Nov 6 00:43:59.289026 kubelet[2990]: E1106 00:43:59.288779 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tx6df" podUID="64e4744f-a9ed-4557-9e74-304e879412c8"
Nov 6 00:43:59.869380 systemd[1]: Started sshd@13-139.178.70.101:22-139.178.89.65:37912.service - OpenSSH per-connection server daemon (139.178.89.65:37912).
Nov 6 00:44:00.073136 sshd[5285]: Accepted publickey for core from 139.178.89.65 port 37912 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc
Nov 6 00:44:00.074189 sshd-session[5285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:44:00.076926 systemd-logind[1659]: New session 16 of user core.
Nov 6 00:44:00.084985 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 6 00:44:00.313524 sshd[5289]: Connection closed by 139.178.89.65 port 37912
Nov 6 00:44:00.314441 sshd-session[5285]: pam_unix(sshd:session): session closed for user core
Nov 6 00:44:00.316887 systemd[1]: sshd@13-139.178.70.101:22-139.178.89.65:37912.service: Deactivated successfully.
Nov 6 00:44:00.318520 systemd[1]: session-16.scope: Deactivated successfully.
Nov 6 00:44:00.321359 systemd-logind[1659]: Session 16 logged out. Waiting for processes to exit.
Nov 6 00:44:00.322838 systemd-logind[1659]: Removed session 16.
Nov 6 00:44:01.290010 kubelet[2990]: E1106 00:44:01.289980 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b6568c646-xhvbr" podUID="d2f87c6a-006e-4994-b5d3-96321f7afa74"
Nov 6 00:44:03.290308 kubelet[2990]: E1106 00:44:03.290264 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b655bc9b9-hp8mt" podUID="68d221fc-9d1f-4317-856d-8104af586bb8"
Nov 6 00:44:05.327103 systemd[1]: Started sshd@14-139.178.70.101:22-139.178.89.65:37922.service - OpenSSH per-connection server daemon (139.178.89.65:37922).
Nov 6 00:44:05.368889 sshd[5300]: Accepted publickey for core from 139.178.89.65 port 37922 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc
Nov 6 00:44:05.369525 sshd-session[5300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:44:05.372428 systemd-logind[1659]: New session 17 of user core.
Nov 6 00:44:05.377101 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 6 00:44:05.478337 sshd[5303]: Connection closed by 139.178.89.65 port 37922
Nov 6 00:44:05.479553 sshd-session[5300]: pam_unix(sshd:session): session closed for user core
Nov 6 00:44:05.485254 systemd[1]: sshd@14-139.178.70.101:22-139.178.89.65:37922.service: Deactivated successfully.
Nov 6 00:44:05.486264 systemd[1]: session-17.scope: Deactivated successfully.
Nov 6 00:44:05.487020 systemd-logind[1659]: Session 17 logged out. Waiting for processes to exit.
Nov 6 00:44:05.487668 systemd-logind[1659]: Removed session 17.
Nov 6 00:44:08.290812 kubelet[2990]: E1106 00:44:08.289933 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4c854798-qdwsh" podUID="f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e"
Nov 6 00:44:09.291979 kubelet[2990]: E1106 00:44:09.291925 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ct65m" podUID="d1996df9-2b05-483b-a46f-e6437e23c06c"
Nov 6 00:44:10.288554 kubelet[2990]: E1106 00:44:10.288482 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4c854798-5ckfk" podUID="c0e494e7-cf65-452e-8423-9173b08e8c13"
Nov 6 00:44:10.288992 kubelet[2990]: E1106 00:44:10.288947 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tx6df" podUID="64e4744f-a9ed-4557-9e74-304e879412c8"
Nov 6 00:44:10.292921 kubelet[2990]: E1106 00:44:10.292856 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not
found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767d5fc77b-l2scd" podUID="459c9ede-8e15-4c74-8abd-6b9e688b2183" Nov 6 00:44:10.491161 systemd[1]: Started sshd@15-139.178.70.101:22-139.178.89.65:40280.service - OpenSSH per-connection server daemon (139.178.89.65:40280). Nov 6 00:44:10.570653 sshd[5314]: Accepted publickey for core from 139.178.89.65 port 40280 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc Nov 6 00:44:10.571944 sshd-session[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:44:10.575397 systemd-logind[1659]: New session 18 of user core. Nov 6 00:44:10.581952 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 6 00:44:10.708620 sshd[5317]: Connection closed by 139.178.89.65 port 40280 Nov 6 00:44:10.710627 sshd-session[5314]: pam_unix(sshd:session): session closed for user core Nov 6 00:44:10.718700 systemd[1]: sshd@15-139.178.70.101:22-139.178.89.65:40280.service: Deactivated successfully. Nov 6 00:44:10.721438 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 00:44:10.723558 systemd-logind[1659]: Session 18 logged out. Waiting for processes to exit. Nov 6 00:44:10.726668 systemd[1]: Started sshd@16-139.178.70.101:22-139.178.89.65:40296.service - OpenSSH per-connection server daemon (139.178.89.65:40296). Nov 6 00:44:10.728986 systemd-logind[1659]: Removed session 18. 
Nov 6 00:44:10.775217 sshd[5329]: Accepted publickey for core from 139.178.89.65 port 40296 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc Nov 6 00:44:10.776364 sshd-session[5329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:44:10.781322 systemd-logind[1659]: New session 19 of user core. Nov 6 00:44:10.784973 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 6 00:44:11.447339 sshd[5332]: Connection closed by 139.178.89.65 port 40296 Nov 6 00:44:11.447980 sshd-session[5329]: pam_unix(sshd:session): session closed for user core Nov 6 00:44:11.457253 systemd[1]: sshd@16-139.178.70.101:22-139.178.89.65:40296.service: Deactivated successfully. Nov 6 00:44:11.458697 systemd[1]: session-19.scope: Deactivated successfully. Nov 6 00:44:11.459905 systemd-logind[1659]: Session 19 logged out. Waiting for processes to exit. Nov 6 00:44:11.462017 systemd[1]: Started sshd@17-139.178.70.101:22-139.178.89.65:40302.service - OpenSSH per-connection server daemon (139.178.89.65:40302). Nov 6 00:44:11.463320 systemd-logind[1659]: Removed session 19. Nov 6 00:44:11.501153 sshd[5344]: Accepted publickey for core from 139.178.89.65 port 40302 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc Nov 6 00:44:11.502003 sshd-session[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:44:11.505984 systemd-logind[1659]: New session 20 of user core. Nov 6 00:44:11.514059 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 6 00:44:12.129408 sshd[5347]: Connection closed by 139.178.89.65 port 40302 Nov 6 00:44:12.132855 sshd-session[5344]: pam_unix(sshd:session): session closed for user core Nov 6 00:44:12.143242 systemd[1]: sshd@17-139.178.70.101:22-139.178.89.65:40302.service: Deactivated successfully. Nov 6 00:44:12.145458 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 00:44:12.147923 systemd-logind[1659]: Session 20 logged out. 
Waiting for processes to exit. Nov 6 00:44:12.149505 systemd[1]: Started sshd@18-139.178.70.101:22-139.178.89.65:40304.service - OpenSSH per-connection server daemon (139.178.89.65:40304). Nov 6 00:44:12.161319 systemd-logind[1659]: Removed session 20. Nov 6 00:44:12.247516 sshd[5362]: Accepted publickey for core from 139.178.89.65 port 40304 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc Nov 6 00:44:12.248922 sshd-session[5362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:44:12.252087 systemd-logind[1659]: New session 21 of user core. Nov 6 00:44:12.258172 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 6 00:44:12.517309 sshd[5365]: Connection closed by 139.178.89.65 port 40304 Nov 6 00:44:12.517760 sshd-session[5362]: pam_unix(sshd:session): session closed for user core Nov 6 00:44:12.526677 systemd[1]: sshd@18-139.178.70.101:22-139.178.89.65:40304.service: Deactivated successfully. Nov 6 00:44:12.529853 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 00:44:12.531139 systemd-logind[1659]: Session 21 logged out. Waiting for processes to exit. Nov 6 00:44:12.535118 systemd[1]: Started sshd@19-139.178.70.101:22-139.178.89.65:40312.service - OpenSSH per-connection server daemon (139.178.89.65:40312). Nov 6 00:44:12.536583 systemd-logind[1659]: Removed session 21. Nov 6 00:44:12.599688 sshd[5375]: Accepted publickey for core from 139.178.89.65 port 40312 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc Nov 6 00:44:12.600953 sshd-session[5375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:44:12.606990 systemd-logind[1659]: New session 22 of user core. Nov 6 00:44:12.611195 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 6 00:44:12.754837 sshd[5378]: Connection closed by 139.178.89.65 port 40312 Nov 6 00:44:12.755231 sshd-session[5375]: pam_unix(sshd:session): session closed for user core Nov 6 00:44:12.759513 systemd[1]: sshd@19-139.178.70.101:22-139.178.89.65:40312.service: Deactivated successfully. Nov 6 00:44:12.760695 systemd[1]: session-22.scope: Deactivated successfully. Nov 6 00:44:12.761200 systemd-logind[1659]: Session 22 logged out. Waiting for processes to exit. Nov 6 00:44:12.762665 systemd-logind[1659]: Removed session 22. Nov 6 00:44:14.288810 kubelet[2990]: E1106 00:44:14.288583 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b655bc9b9-hp8mt" podUID="68d221fc-9d1f-4317-856d-8104af586bb8" Nov 6 00:44:15.294423 kubelet[2990]: E1106 00:44:15.293460 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b6568c646-xhvbr" podUID="d2f87c6a-006e-4994-b5d3-96321f7afa74" Nov 6 00:44:17.764928 systemd[1]: Started sshd@20-139.178.70.101:22-139.178.89.65:40722.service - OpenSSH per-connection server daemon 
(139.178.89.65:40722). Nov 6 00:44:17.851800 sshd[5402]: Accepted publickey for core from 139.178.89.65 port 40722 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc Nov 6 00:44:17.852360 sshd-session[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:44:17.857843 systemd-logind[1659]: New session 23 of user core. Nov 6 00:44:17.861952 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 6 00:44:17.998183 sshd[5405]: Connection closed by 139.178.89.65 port 40722 Nov 6 00:44:17.998107 sshd-session[5402]: pam_unix(sshd:session): session closed for user core Nov 6 00:44:18.001543 systemd[1]: sshd@20-139.178.70.101:22-139.178.89.65:40722.service: Deactivated successfully. Nov 6 00:44:18.003738 systemd[1]: session-23.scope: Deactivated successfully. Nov 6 00:44:18.007200 systemd-logind[1659]: Session 23 logged out. Waiting for processes to exit. Nov 6 00:44:18.008173 systemd-logind[1659]: Removed session 23. Nov 6 00:44:21.296566 kubelet[2990]: E1106 00:44:21.296431 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767d5fc77b-l2scd" 
podUID="459c9ede-8e15-4c74-8abd-6b9e688b2183" Nov 6 00:44:22.296425 containerd[1685]: time="2025-11-06T00:44:22.296377143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:44:22.651112 containerd[1685]: time="2025-11-06T00:44:22.651009235Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:44:22.651367 containerd[1685]: time="2025-11-06T00:44:22.651327592Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:44:22.651428 containerd[1685]: time="2025-11-06T00:44:22.651385761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:44:22.656216 kubelet[2990]: E1106 00:44:22.656168 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:44:22.656585 kubelet[2990]: E1106 00:44:22.656459 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:44:22.656585 kubelet[2990]: E1106 00:44:22.656531 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-5b4c854798-qdwsh_calico-apiserver(f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:44:22.656585 kubelet[2990]: E1106 00:44:22.656561 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4c854798-qdwsh" podUID="f9e94ba9-fb6e-467c-acd4-cc6ad1a73d5e" Nov 6 00:44:23.009965 systemd[1]: Started sshd@21-139.178.70.101:22-139.178.89.65:40736.service - OpenSSH per-connection server daemon (139.178.89.65:40736). Nov 6 00:44:23.084079 sshd[5425]: Accepted publickey for core from 139.178.89.65 port 40736 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc Nov 6 00:44:23.084944 sshd-session[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:44:23.087897 systemd-logind[1659]: New session 24 of user core. Nov 6 00:44:23.096081 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 6 00:44:23.194647 sshd[5428]: Connection closed by 139.178.89.65 port 40736 Nov 6 00:44:23.194894 sshd-session[5425]: pam_unix(sshd:session): session closed for user core Nov 6 00:44:23.197755 systemd[1]: sshd@21-139.178.70.101:22-139.178.89.65:40736.service: Deactivated successfully. Nov 6 00:44:23.199861 systemd[1]: session-24.scope: Deactivated successfully. Nov 6 00:44:23.202208 systemd-logind[1659]: Session 24 logged out. Waiting for processes to exit. 
Nov 6 00:44:23.203711 systemd-logind[1659]: Removed session 24. Nov 6 00:44:23.288797 kubelet[2990]: E1106 00:44:23.288620 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ct65m" podUID="d1996df9-2b05-483b-a46f-e6437e23c06c" Nov 6 00:44:24.288540 containerd[1685]: time="2025-11-06T00:44:24.288481309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:44:24.639801 containerd[1685]: time="2025-11-06T00:44:24.639597554Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:44:24.639972 containerd[1685]: time="2025-11-06T00:44:24.639936974Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:44:24.640013 containerd[1685]: time="2025-11-06T00:44:24.639995280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active 
requests=0, bytes read=77" Nov 6 00:44:24.640160 kubelet[2990]: E1106 00:44:24.640133 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:44:24.640369 kubelet[2990]: E1106 00:44:24.640166 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:44:24.640369 kubelet[2990]: E1106 00:44:24.640242 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5b4c854798-5ckfk_calico-apiserver(c0e494e7-cf65-452e-8423-9173b08e8c13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:44:24.640591 kubelet[2990]: E1106 00:44:24.640264 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4c854798-5ckfk" podUID="c0e494e7-cf65-452e-8423-9173b08e8c13" Nov 6 00:44:25.289232 containerd[1685]: time="2025-11-06T00:44:25.289169402Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:44:25.640060 containerd[1685]: time="2025-11-06T00:44:25.639771565Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:44:25.640161 containerd[1685]: time="2025-11-06T00:44:25.640133273Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:44:25.640306 containerd[1685]: time="2025-11-06T00:44:25.640215258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:44:25.640363 kubelet[2990]: E1106 00:44:25.640340 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:44:25.640581 kubelet[2990]: E1106 00:44:25.640380 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:44:25.640581 kubelet[2990]: E1106 00:44:25.640437 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tx6df_calico-system(64e4744f-a9ed-4557-9e74-304e879412c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:44:25.640581 kubelet[2990]: E1106 00:44:25.640464 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tx6df" podUID="64e4744f-a9ed-4557-9e74-304e879412c8" Nov 6 00:44:26.547272 containerd[1685]: time="2025-11-06T00:44:26.547243908Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11d5b65d22425718dbef4a1fa49c80b01f4beabf387719ff9e32b2595660cfaa\" id:\"dd731d2452a53107e31854598c3d178dccc087a783cf034378843c82f9ab799a\" pid:5452 exited_at:{seconds:1762389866 nanos:546959521}" Nov 6 00:44:28.207155 systemd[1]: Started sshd@22-139.178.70.101:22-139.178.89.65:43824.service - OpenSSH per-connection server daemon (139.178.89.65:43824). Nov 6 00:44:28.278065 sshd[5465]: Accepted publickey for core from 139.178.89.65 port 43824 ssh2: RSA SHA256:aeJ0iyVFFcS2Dq9U+AYxygIIK9Bveb16u++2F2avGlc Nov 6 00:44:28.279489 sshd-session[5465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:44:28.283563 systemd-logind[1659]: New session 25 of user core. Nov 6 00:44:28.287961 systemd[1]: Started session-25.scope - Session 25 of User core. 
Nov 6 00:44:28.290189 containerd[1685]: time="2025-11-06T00:44:28.289953948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:44:28.434884 sshd[5469]: Connection closed by 139.178.89.65 port 43824 Nov 6 00:44:28.435250 sshd-session[5465]: pam_unix(sshd:session): session closed for user core Nov 6 00:44:28.437439 systemd[1]: sshd@22-139.178.70.101:22-139.178.89.65:43824.service: Deactivated successfully. Nov 6 00:44:28.439645 systemd[1]: session-25.scope: Deactivated successfully. Nov 6 00:44:28.442785 systemd-logind[1659]: Session 25 logged out. Waiting for processes to exit. Nov 6 00:44:28.443552 systemd-logind[1659]: Removed session 25. Nov 6 00:44:28.647292 containerd[1685]: time="2025-11-06T00:44:28.647136877Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:44:28.648033 containerd[1685]: time="2025-11-06T00:44:28.648003757Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:44:28.648081 containerd[1685]: time="2025-11-06T00:44:28.648051945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:44:28.648228 kubelet[2990]: E1106 00:44:28.648191 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:44:28.648428 kubelet[2990]: E1106 00:44:28.648234 2990 kuberuntime_image.go:43] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:44:28.648428 kubelet[2990]: E1106 00:44:28.648284 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5b655bc9b9-hp8mt_calico-system(68d221fc-9d1f-4317-856d-8104af586bb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:44:28.648428 kubelet[2990]: E1106 00:44:28.648304 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b655bc9b9-hp8mt" podUID="68d221fc-9d1f-4317-856d-8104af586bb8" Nov 6 00:44:30.289407 containerd[1685]: time="2025-11-06T00:44:30.288650231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:44:30.597928 containerd[1685]: time="2025-11-06T00:44:30.597701313Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:44:30.598475 containerd[1685]: time="2025-11-06T00:44:30.598354288Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:44:30.598475 containerd[1685]: time="2025-11-06T00:44:30.598416468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:44:30.598616 kubelet[2990]: E1106 00:44:30.598594 2990 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:44:30.599135 kubelet[2990]: E1106 00:44:30.598803 2990 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:44:30.599135 kubelet[2990]: E1106 00:44:30.598861 2990 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-b6568c646-xhvbr_calico-apiserver(d2f87c6a-006e-4994-b5d3-96321f7afa74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:44:30.599135 kubelet[2990]: E1106 00:44:30.598900 2990 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b6568c646-xhvbr" podUID="d2f87c6a-006e-4994-b5d3-96321f7afa74"