Oct 30 05:33:03.373081 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Thu Oct 30 03:45:45 -00 2025
Oct 30 05:33:03.373123 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c9268d42679a2004cee399a64e9ca764369a7ade73fcdb4dc46afa45c3a8dab8
Oct 30 05:33:03.373131 kernel: Disabled fast string operations
Oct 30 05:33:03.373136 kernel: BIOS-provided physical RAM map:
Oct 30 05:33:03.373141 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Oct 30 05:33:03.373145 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Oct 30 05:33:03.373151 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Oct 30 05:33:03.373161 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Oct 30 05:33:03.373166 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Oct 30 05:33:03.373171 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Oct 30 05:33:03.373176 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Oct 30 05:33:03.373181 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Oct 30 05:33:03.373186 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Oct 30 05:33:03.373191 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Oct 30 05:33:03.373201 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Oct 30 05:33:03.373207 kernel: NX (Execute Disable) protection: active
Oct 30 05:33:03.373212 kernel: APIC: Static calls initialized
Oct 30 05:33:03.373218 kernel: SMBIOS 2.7 present.
Oct 30 05:33:03.373224 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Oct 30 05:33:03.373229 kernel: DMI: Memory slots populated: 1/128
Oct 30 05:33:03.373235 kernel: vmware: hypercall mode: 0x00
Oct 30 05:33:03.373247 kernel: Hypervisor detected: VMware
Oct 30 05:33:03.373253 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Oct 30 05:33:03.373259 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Oct 30 05:33:03.373264 kernel: vmware: using clock offset of 3445360226 ns
Oct 30 05:33:03.373270 kernel: tsc: Detected 3408.000 MHz processor
Oct 30 05:33:03.373276 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 30 05:33:03.373283 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 30 05:33:03.373289 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Oct 30 05:33:03.373295 kernel: total RAM covered: 3072M
Oct 30 05:33:03.373307 kernel: Found optimal setting for mtrr clean up
Oct 30 05:33:03.373313 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Oct 30 05:33:03.373320 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Oct 30 05:33:03.373325 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 30 05:33:03.373331 kernel: Using GB pages for direct mapping
Oct 30 05:33:03.373337 kernel: ACPI: Early table checksum verification disabled
Oct 30 05:33:03.373343 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Oct 30 05:33:03.373354 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Oct 30 05:33:03.373360 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Oct 30 05:33:03.373367 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Oct 30 05:33:03.373377 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Oct 30 05:33:03.373384 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Oct 30 05:33:03.373390 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Oct 30 05:33:03.373400 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Oct 30 05:33:03.373407 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Oct 30 05:33:03.373413 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Oct 30 05:33:03.373419 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Oct 30 05:33:03.373425 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Oct 30 05:33:03.373432 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Oct 30 05:33:03.373442 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Oct 30 05:33:03.373449 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Oct 30 05:33:03.373454 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Oct 30 05:33:03.373461 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Oct 30 05:33:03.373466 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Oct 30 05:33:03.373473 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Oct 30 05:33:03.373479 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Oct 30 05:33:03.373489 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Oct 30 05:33:03.373495 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Oct 30 05:33:03.373501 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Oct 30 05:33:03.373508 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Oct 30 05:33:03.373514 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Oct 30 05:33:03.373520 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff]
Oct 30 05:33:03.373527 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff]
Oct 30 05:33:03.373533 kernel: Zone ranges:
Oct 30 05:33:03.373544 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 30 05:33:03.373550 kernel:   DMA32    [mem 0x0000000001000000-0x000000007fffffff]
Oct 30 05:33:03.373556 kernel:   Normal   empty
Oct 30 05:33:03.373562 kernel:   Device   empty
Oct 30 05:33:03.373568 kernel: Movable zone start for each node
Oct 30 05:33:03.373574 kernel: Early memory node ranges
Oct 30 05:33:03.373580 kernel:   node   0: [mem 0x0000000000001000-0x000000000009dfff]
Oct 30 05:33:03.373586 kernel:   node   0: [mem 0x0000000000100000-0x000000007fedffff]
Oct 30 05:33:03.373596 kernel:   node   0: [mem 0x000000007ff00000-0x000000007fffffff]
Oct 30 05:33:03.373602 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Oct 30 05:33:03.373609 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 30 05:33:03.373615 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Oct 30 05:33:03.373621 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Oct 30 05:33:03.373627 kernel: ACPI: PM-Timer IO Port: 0x1008
Oct 30 05:33:03.373633 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Oct 30 05:33:03.373644 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Oct 30 05:33:03.373650 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Oct 30 05:33:03.373656 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Oct 30 05:33:03.373662 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Oct 30 05:33:03.373668 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Oct 30 05:33:03.373674 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Oct 30 05:33:03.373680 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Oct 30 05:33:03.373686 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Oct 30 05:33:03.373696 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Oct 30 05:33:03.373702 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Oct 30 05:33:03.373709 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Oct 30 05:33:03.373714 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Oct 30 05:33:03.373720 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Oct 30 05:33:03.373726 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Oct 30 05:33:03.373731 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Oct 30 05:33:03.373738 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Oct 30 05:33:03.373759 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Oct 30 05:33:03.373766 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Oct 30 05:33:03.373772 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Oct 30 05:33:03.373778 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Oct 30 05:33:03.373783 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Oct 30 05:33:03.373789 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Oct 30 05:33:03.373795 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Oct 30 05:33:03.373805 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Oct 30 05:33:03.373821 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Oct 30 05:33:03.373828 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Oct 30 05:33:03.373834 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Oct 30 05:33:03.373840 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Oct 30 05:33:03.373846 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Oct 30 05:33:03.373852 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Oct 30 05:33:03.373858 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Oct 30 05:33:03.373864 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Oct 30 05:33:03.373875 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Oct 30 05:33:03.373881 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Oct 30 05:33:03.373887 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Oct 30 05:33:03.373893 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Oct 30 05:33:03.373899 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Oct 30 05:33:03.373905 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Oct 30 05:33:03.373911 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Oct 30 05:33:03.373935 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Oct 30 05:33:03.373943 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Oct 30 05:33:03.373949 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Oct 30 05:33:03.373961 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Oct 30 05:33:03.373967 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Oct 30 05:33:03.373973 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Oct 30 05:33:03.373980 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Oct 30 05:33:03.373986 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Oct 30 05:33:03.373997 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Oct 30 05:33:03.374003 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Oct 30 05:33:03.374010 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Oct 30 05:33:03.374016 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Oct 30 05:33:03.374022 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Oct 30 05:33:03.374028 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Oct 30 05:33:03.374035 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Oct 30 05:33:03.374041 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Oct 30 05:33:03.374052 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Oct 30 05:33:03.374058 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Oct 30 05:33:03.374064 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Oct 30 05:33:03.374071 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Oct 30 05:33:03.374077 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Oct 30 05:33:03.374084 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Oct 30 05:33:03.374090 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Oct 30 05:33:03.374096 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Oct 30 05:33:03.374109 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Oct 30 05:33:03.374116 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Oct 30 05:33:03.374122 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Oct 30 05:33:03.374129 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Oct 30 05:33:03.374135 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Oct 30 05:33:03.374141 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Oct 30 05:33:03.374147 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Oct 30 05:33:03.374154 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Oct 30 05:33:03.374166 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Oct 30 05:33:03.374172 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Oct 30 05:33:03.374179 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Oct 30 05:33:03.374185 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Oct 30 05:33:03.374191 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Oct 30 05:33:03.374197 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Oct 30 05:33:03.374204 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Oct 30 05:33:03.374210 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Oct 30 05:33:03.374221 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Oct 30 05:33:03.374227 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Oct 30 05:33:03.374234 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Oct 30 05:33:03.374240 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Oct 30 05:33:03.374246 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Oct 30 05:33:03.374252 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Oct 30 05:33:03.374258 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Oct 30 05:33:03.374265 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Oct 30 05:33:03.374275 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Oct 30 05:33:03.374281 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Oct 30 05:33:03.374287 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Oct 30 05:33:03.374294 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Oct 30 05:33:03.374300 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Oct 30 05:33:03.374307 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Oct 30 05:33:03.374313 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Oct 30 05:33:03.374319 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Oct 30 05:33:03.374331 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Oct 30 05:33:03.374339 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Oct 30 05:33:03.374345 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Oct 30 05:33:03.374351 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Oct 30 05:33:03.374358 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Oct 30 05:33:03.374364 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Oct 30 05:33:03.374370 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Oct 30 05:33:03.374376 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Oct 30 05:33:03.374389 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Oct 30 05:33:03.374395 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Oct 30 05:33:03.374401 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Oct 30 05:33:03.374407 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Oct 30 05:33:03.374414 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Oct 30 05:33:03.374420 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Oct 30 05:33:03.374427 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Oct 30 05:33:03.374433 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Oct 30 05:33:03.374444 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Oct 30 05:33:03.374450 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Oct 30 05:33:03.374456 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Oct 30 05:33:03.374463 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Oct 30 05:33:03.374469 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Oct 30 05:33:03.374476 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Oct 30 05:33:03.374484 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Oct 30 05:33:03.375766 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Oct 30 05:33:03.375786 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Oct 30 05:33:03.375793 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Oct 30 05:33:03.375799 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Oct 30 05:33:03.375806 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Oct 30 05:33:03.375812 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Oct 30 05:33:03.375819 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Oct 30 05:33:03.375825 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Oct 30 05:33:03.375831 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Oct 30 05:33:03.375843 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Oct 30 05:33:03.375850 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Oct 30 05:33:03.375856 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 30 05:33:03.375863 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Oct 30 05:33:03.375869 kernel: TSC deadline timer available
Oct 30 05:33:03.375876 kernel: CPU topo: Max. logical packages: 128
Oct 30 05:33:03.375882 kernel: CPU topo: Max. logical dies: 128
Oct 30 05:33:03.375889 kernel: CPU topo: Max. dies per package: 1
Oct 30 05:33:03.375899 kernel: CPU topo: Max. threads per core: 1
Oct 30 05:33:03.375906 kernel: CPU topo: Num. cores per package: 1
Oct 30 05:33:03.375912 kernel: CPU topo: Num. threads per package: 1
Oct 30 05:33:03.375919 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs
Oct 30 05:33:03.375925 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Oct 30 05:33:03.375932 kernel: Booting paravirtualized kernel on VMware hypervisor
Oct 30 05:33:03.375938 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 30 05:33:03.375945 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Oct 30 05:33:03.375956 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Oct 30 05:33:03.375963 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Oct 30 05:33:03.375969 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Oct 30 05:33:03.375976 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Oct 30 05:33:03.375982 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Oct 30 05:33:03.375988 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Oct 30 05:33:03.375995 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Oct 30 05:33:03.376005 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Oct 30 05:33:03.376012 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Oct 30 05:33:03.376018 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Oct 30 05:33:03.376025 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Oct 30 05:33:03.376031 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Oct 30 05:33:03.376037 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Oct 30 05:33:03.376043 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Oct 30 05:33:03.376054 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Oct 30 05:33:03.376061 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Oct 30 05:33:03.376067 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Oct 30 05:33:03.376073 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Oct 30 05:33:03.376081 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c9268d42679a2004cee399a64e9ca764369a7ade73fcdb4dc46afa45c3a8dab8
Oct 30 05:33:03.376088 kernel: random: crng init done
Oct 30 05:33:03.376098 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Oct 30 05:33:03.376105 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Oct 30 05:33:03.376112 kernel: printk: log_buf_len min size: 262144 bytes
Oct 30 05:33:03.376118 kernel: printk: log_buf_len: 1048576 bytes
Oct 30 05:33:03.376125 kernel: printk: early log buf free: 245688(93%)
Oct 30 05:33:03.376131 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 30 05:33:03.376138 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 30 05:33:03.376144 kernel: Fallback order for Node 0: 0
Oct 30 05:33:03.376158 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 524157
Oct 30 05:33:03.376167 kernel: Policy zone: DMA32
Oct 30 05:33:03.376174 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 30 05:33:03.376180 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Oct 30 05:33:03.376187 kernel: ftrace: allocating 40092 entries in 157 pages
Oct 30 05:33:03.376194 kernel: ftrace: allocated 157 pages with 5 groups
Oct 30 05:33:03.376201 kernel: Dynamic Preempt: voluntary
Oct 30 05:33:03.376212 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 30 05:33:03.376220 kernel: rcu: RCU event tracing is enabled.
Oct 30 05:33:03.376227 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Oct 30 05:33:03.376234 kernel: Trampoline variant of Tasks RCU enabled.
Oct 30 05:33:03.376240 kernel: Rude variant of Tasks RCU enabled.
Oct 30 05:33:03.376246 kernel: Tracing variant of Tasks RCU enabled.
Oct 30 05:33:03.376253 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 30 05:33:03.376259 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Oct 30 05:33:03.376270 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Oct 30 05:33:03.376277 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Oct 30 05:33:03.376284 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Oct 30 05:33:03.376290 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Oct 30 05:33:03.376297 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Oct 30 05:33:03.376304 kernel: Console: colour VGA+ 80x25
Oct 30 05:33:03.376310 kernel: printk: legacy console [tty0] enabled
Oct 30 05:33:03.376321 kernel: printk: legacy console [ttyS0] enabled
Oct 30 05:33:03.376327 kernel: ACPI: Core revision 20240827
Oct 30 05:33:03.376334 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Oct 30 05:33:03.376341 kernel: APIC: Switch to symmetric I/O mode setup
Oct 30 05:33:03.376347 kernel: x2apic enabled
Oct 30 05:33:03.376354 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 30 05:33:03.376361 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 30 05:33:03.376372 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Oct 30 05:33:03.376379 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Oct 30 05:33:03.376385 kernel: Disabled fast string operations
Oct 30 05:33:03.376392 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Oct 30 05:33:03.376398 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Oct 30 05:33:03.376405 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 30 05:33:03.376411 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Oct 30 05:33:03.376424 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Oct 30 05:33:03.376431 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Oct 30 05:33:03.376437 kernel: RETBleed: Mitigation: Enhanced IBRS
Oct 30 05:33:03.376444 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 30 05:33:03.376451 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 30 05:33:03.376457 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Oct 30 05:33:03.376464 kernel: SRBDS: Unknown: Dependent on hypervisor status
Oct 30 05:33:03.376476 kernel: GDS: Unknown: Dependent on hypervisor status
Oct 30 05:33:03.376483 kernel: active return thunk: its_return_thunk
Oct 30 05:33:03.376489 kernel: ITS: Mitigation: Aligned branch/return thunks
Oct 30 05:33:03.376496 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 30 05:33:03.376502 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 30 05:33:03.376509 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 30 05:33:03.376515 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 30 05:33:03.376527 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 30 05:33:03.376533 kernel: Freeing SMP alternatives memory: 32K
Oct 30 05:33:03.376540 kernel: pid_max: default: 131072 minimum: 1024
Oct 30 05:33:03.376546 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 30 05:33:03.376553 kernel: landlock: Up and running.
Oct 30 05:33:03.376560 kernel: SELinux: Initializing.
Oct 30 05:33:03.376566 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 30 05:33:03.376577 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 30 05:33:03.376584 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Oct 30 05:33:03.376590 kernel: Performance Events: Skylake events, core PMU driver.
Oct 30 05:33:03.376597 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Oct 30 05:33:03.376604 kernel: core: CPUID marked event: 'instructions' unavailable
Oct 30 05:33:03.376610 kernel: core: CPUID marked event: 'bus cycles' unavailable
Oct 30 05:33:03.376616 kernel: core: CPUID marked event: 'cache references' unavailable
Oct 30 05:33:03.376627 kernel: core: CPUID marked event: 'cache misses' unavailable
Oct 30 05:33:03.376633 kernel: core: CPUID marked event: 'branch instructions' unavailable
Oct 30 05:33:03.376640 kernel: core: CPUID marked event: 'branch misses' unavailable
Oct 30 05:33:03.376646 kernel: ... version:        1
Oct 30 05:33:03.376653 kernel: ... bit width:      48
Oct 30 05:33:03.376659 kernel: ... generic registers: 4
Oct 30 05:33:03.376666 kernel: ... value mask:     0000ffffffffffff
Oct 30 05:33:03.376676 kernel: ... max period:     000000007fffffff
Oct 30 05:33:03.376683 kernel: ... fixed-purpose events: 0
Oct 30 05:33:03.376690 kernel: ... event mask:     000000000000000f
Oct 30 05:33:03.376696 kernel: signal: max sigframe size: 1776
Oct 30 05:33:03.376703 kernel: rcu: Hierarchical SRCU implementation.
Oct 30 05:33:03.376710 kernel: rcu: Max phase no-delay instances is 400.
Oct 30 05:33:03.376716 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level
Oct 30 05:33:03.376727 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 30 05:33:03.376734 kernel: smp: Bringing up secondary CPUs ...
Oct 30 05:33:03.376835 kernel: smpboot: x86: Booting SMP configuration:
Oct 30 05:33:03.376844 kernel: .... node #0, CPUs: #1
Oct 30 05:33:03.376851 kernel: Disabled fast string operations
Oct 30 05:33:03.376858 kernel: smp: Brought up 1 node, 2 CPUs
Oct 30 05:33:03.376864 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Oct 30 05:33:03.376871 kernel: Memory: 1942648K/2096628K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15960K init, 2084K bss, 142596K reserved, 0K cma-reserved)
Oct 30 05:33:03.376886 kernel: devtmpfs: initialized
Oct 30 05:33:03.376893 kernel: x86/mm: Memory block size: 128MB
Oct 30 05:33:03.376899 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Oct 30 05:33:03.376906 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 30 05:33:03.376912 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Oct 30 05:33:03.376919 kernel: pinctrl core: initialized pinctrl subsystem
Oct 30 05:33:03.376926 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 30 05:33:03.376937 kernel: audit: initializing netlink subsys (disabled)
Oct 30 05:33:03.376943 kernel: audit: type=2000 audit(1761802379.289:1): state=initialized audit_enabled=0 res=1
Oct 30 05:33:03.376950 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 30 05:33:03.376956 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 30 05:33:03.376963 kernel: cpuidle: using governor menu
Oct 30 05:33:03.376969 kernel: Simple Boot Flag at 0x36 set to 0x80
Oct 30 05:33:03.376976 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 30 05:33:03.376990 kernel: dca service started, version 1.12.1
Oct 30 05:33:03.376997 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f]
Oct 30 05:33:03.377027 kernel: PCI: Using configuration type 1 for base access
Oct 30 05:33:03.377040 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 30 05:33:03.377048 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 30 05:33:03.377055 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 30 05:33:03.377061 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 30 05:33:03.377074 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 30 05:33:03.377081 kernel: ACPI: Added _OSI(Module Device)
Oct 30 05:33:03.377088 kernel: ACPI: Added _OSI(Processor Device)
Oct 30 05:33:03.377095 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 30 05:33:03.377102 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 30 05:33:03.377108 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Oct 30 05:33:03.377115 kernel: ACPI: Interpreter enabled
Oct 30 05:33:03.377127 kernel: ACPI: PM: (supports S0 S1 S5)
Oct 30 05:33:03.377134 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 30 05:33:03.377141 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 30 05:33:03.377148 kernel: PCI: Using E820 reservations for host bridge windows
Oct 30 05:33:03.377155 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Oct 30 05:33:03.377162 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Oct 30 05:33:03.377297 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 30 05:33:03.377382 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Oct 30 05:33:03.377453 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Oct 30 05:33:03.377463 kernel: PCI host bridge to bus 0000:00
Oct 30 05:33:03.377533 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 30 05:33:03.377597 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Oct 30 05:33:03.377669 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 30 05:33:03.377730 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 30 05:33:03.378120 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Oct 30 05:33:03.378186 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Oct 30 05:33:03.378271 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint
Oct 30 05:33:03.378349 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge
Oct 30 05:33:03.378435 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Oct 30 05:33:03.378520 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Oct 30 05:33:03.378597 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint
Oct 30 05:33:03.378680 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f]
Oct 30 05:33:03.379634 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Oct 30 05:33:03.379732 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Oct 30 05:33:03.379848 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Oct 30 05:33:03.379921 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk
Oct 30 05:33:03.380014 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 30 05:33:03.380086 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Oct 30 05:33:03.380155 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Oct 30 05:33:03.380231 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint
Oct 30 05:33:03.380301 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf]
Oct 30 05:33:03.380369 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit]
Oct 30 05:33:03.380454 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint
Oct 30 05:33:03.380523 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f]
Oct 30 05:33:03.380592 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref]
Oct 30 05:33:03.380660 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff]
Oct 30 05:33:03.380728 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref]
Oct 30 05:33:03.380807 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 30 05:33:03.380893 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge
Oct 30 05:33:03.380962 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Oct 30 05:33:03.381035 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Oct 30 05:33:03.381104 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Oct 30 05:33:03.382273 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Oct 30 05:33:03.382373 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 30 05:33:03.382447 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Oct 30 05:33:03.382519 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Oct 30 05:33:03.382589 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Oct 30 05:33:03.382659 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Oct 30 05:33:03.382735 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 30 05:33:03.382834 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Oct 30 05:33:03.382905 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Oct 30 05:33:03.382974 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Oct 30 05:33:03.383054 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Oct 30 05:33:03.383150 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Oct 30 05:33:03.383238 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 30 05:33:03.383324 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Oct 30 05:33:03.383394 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Oct 30 05:33:03.383464 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Oct 30 05:33:03.383533 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 30 05:33:03.383602 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.383686 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.383779 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 30 05:33:03.383852 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 30 05:33:03.383922 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 30 05:33:03.383992 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.384067 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.384149 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 30 05:33:03.384217 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 30 05:33:03.384287 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 30 05:33:03.384355 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.384431 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.384511 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 30 05:33:03.384581 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 30 05:33:03.384650 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 30 05:33:03.384719 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.384827 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.384897 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 30 05:33:03.384975 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 30 05:33:03.385045 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 30 
05:33:03.385114 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.385188 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.385257 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 30 05:33:03.385326 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 30 05:33:03.385406 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 30 05:33:03.385495 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.385590 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.385680 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 30 05:33:03.385788 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 30 05:33:03.385881 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 30 05:33:03.385979 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.386057 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.386126 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 30 05:33:03.386195 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 30 05:33:03.386263 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 30 05:33:03.386332 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 30 05:33:03.386413 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.386496 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.386566 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 30 05:33:03.386635 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 30 05:33:03.386703 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 30 05:33:03.386787 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 30 05:33:03.386868 kernel: pci 0000:00:16.2: PME# supported 
from D0 D3hot D3cold Oct 30 05:33:03.386942 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.388069 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 30 05:33:03.388153 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Oct 30 05:33:03.388225 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 30 05:33:03.388296 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.388388 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.388459 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 30 05:33:03.388528 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 30 05:33:03.388596 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 30 05:33:03.388665 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.388738 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.388833 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 30 05:33:03.388902 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 30 05:33:03.388970 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 30 05:33:03.389038 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.389112 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.389183 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 30 05:33:03.389276 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 30 05:33:03.389345 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 30 05:33:03.389414 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.389488 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.389558 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 30 
05:33:03.389626 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 30 05:33:03.389706 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 30 05:33:03.389801 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.389879 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.389948 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 30 05:33:03.390017 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 30 05:33:03.390085 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 30 05:33:03.390165 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 30 05:33:03.390236 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.390309 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.390378 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 30 05:33:03.390457 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 30 05:33:03.390526 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 30 05:33:03.390595 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 30 05:33:03.390662 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.390748 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.390842 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 30 05:33:03.390910 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 30 05:33:03.390979 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 30 05:33:03.391047 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 30 05:33:03.391115 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.391199 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.391268 kernel: pci 
0000:00:17.3: PCI bridge to [bus 16] Oct 30 05:33:03.391336 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 30 05:33:03.391404 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 30 05:33:03.391472 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.391545 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.391626 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 30 05:33:03.391696 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 30 05:33:03.391822 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 30 05:33:03.392150 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.392228 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.392816 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 30 05:33:03.392906 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 30 05:33:03.392978 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 30 05:33:03.393047 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.393121 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.393191 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 30 05:33:03.393260 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 30 05:33:03.393345 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Oct 30 05:33:03.393416 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.393493 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.393563 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 30 05:33:03.393632 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 30 05:33:03.393700 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] Oct 30 05:33:03.393792 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.393865 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.393934 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 30 05:33:03.394003 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 30 05:33:03.394071 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 30 05:33:03.394142 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 30 05:33:03.394221 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.394295 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.394363 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 30 05:33:03.394447 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 30 05:33:03.394516 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Oct 30 05:33:03.394584 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 30 05:33:03.394663 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.394737 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.395479 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 30 05:33:03.395553 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 30 05:33:03.395624 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 30 05:33:03.395694 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.395804 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.395877 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 30 05:33:03.395945 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 30 05:33:03.396013 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 
30 05:33:03.396081 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.398159 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.398267 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 30 05:33:03.398339 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 30 05:33:03.398409 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 30 05:33:03.398479 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.398554 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.398634 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 30 05:33:03.398702 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 30 05:33:03.401358 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 30 05:33:03.401464 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.401546 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.401619 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 30 05:33:03.401712 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 30 05:33:03.401807 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 30 05:33:03.401879 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.401955 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 30 05:33:03.402026 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 30 05:33:03.402097 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 30 05:33:03.402179 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 30 05:33:03.402248 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.402320 kernel: pci_bus 0000:01: extended config space not accessible Oct 30 05:33:03.402392 kernel: pci 
0000:00:01.0: PCI bridge to [bus 01] Oct 30 05:33:03.402463 kernel: pci_bus 0000:02: extended config space not accessible Oct 30 05:33:03.402474 kernel: acpiphp: Slot [32] registered Oct 30 05:33:03.402489 kernel: acpiphp: Slot [33] registered Oct 30 05:33:03.402496 kernel: acpiphp: Slot [34] registered Oct 30 05:33:03.402503 kernel: acpiphp: Slot [35] registered Oct 30 05:33:03.402510 kernel: acpiphp: Slot [36] registered Oct 30 05:33:03.402517 kernel: acpiphp: Slot [37] registered Oct 30 05:33:03.402523 kernel: acpiphp: Slot [38] registered Oct 30 05:33:03.402530 kernel: acpiphp: Slot [39] registered Oct 30 05:33:03.402537 kernel: acpiphp: Slot [40] registered Oct 30 05:33:03.402549 kernel: acpiphp: Slot [41] registered Oct 30 05:33:03.402556 kernel: acpiphp: Slot [42] registered Oct 30 05:33:03.402563 kernel: acpiphp: Slot [43] registered Oct 30 05:33:03.402570 kernel: acpiphp: Slot [44] registered Oct 30 05:33:03.402577 kernel: acpiphp: Slot [45] registered Oct 30 05:33:03.402583 kernel: acpiphp: Slot [46] registered Oct 30 05:33:03.402590 kernel: acpiphp: Slot [47] registered Oct 30 05:33:03.402602 kernel: acpiphp: Slot [48] registered Oct 30 05:33:03.402609 kernel: acpiphp: Slot [49] registered Oct 30 05:33:03.402616 kernel: acpiphp: Slot [50] registered Oct 30 05:33:03.402623 kernel: acpiphp: Slot [51] registered Oct 30 05:33:03.402630 kernel: acpiphp: Slot [52] registered Oct 30 05:33:03.402637 kernel: acpiphp: Slot [53] registered Oct 30 05:33:03.402644 kernel: acpiphp: Slot [54] registered Oct 30 05:33:03.402651 kernel: acpiphp: Slot [55] registered Oct 30 05:33:03.402662 kernel: acpiphp: Slot [56] registered Oct 30 05:33:03.402669 kernel: acpiphp: Slot [57] registered Oct 30 05:33:03.402676 kernel: acpiphp: Slot [58] registered Oct 30 05:33:03.402683 kernel: acpiphp: Slot [59] registered Oct 30 05:33:03.402690 kernel: acpiphp: Slot [60] registered Oct 30 05:33:03.402697 kernel: acpiphp: Slot [61] registered Oct 30 05:33:03.402704 kernel: acpiphp: Slot 
[62] registered Oct 30 05:33:03.402715 kernel: acpiphp: Slot [63] registered Oct 30 05:33:03.402808 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Oct 30 05:33:03.402889 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Oct 30 05:33:03.402959 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Oct 30 05:33:03.403030 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Oct 30 05:33:03.403190 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Oct 30 05:33:03.403277 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Oct 30 05:33:03.403355 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Oct 30 05:33:03.403428 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Oct 30 05:33:03.403500 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Oct 30 05:33:03.403572 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Oct 30 05:33:03.403643 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Oct 30 05:33:03.403724 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Oct 30 05:33:03.403805 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 30 05:33:03.403877 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 30 05:33:03.403949 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 30 05:33:03.404019 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 30 05:33:03.404089 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 30 05:33:03.404173 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 30 05:33:03.404246 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 30 05:33:03.404319 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 30 05:33:03.404411 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Oct 30 05:33:03.404485 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Oct 30 05:33:03.404556 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Oct 30 05:33:03.404638 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Oct 30 05:33:03.404709 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] Oct 30 05:33:03.404796 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Oct 30 05:33:03.404868 kernel: pci 0000:0b:00.0: supports D1 D2 Oct 30 05:33:03.404940 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 30 05:33:03.405011 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Oct 30 05:33:03.405094 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 30 05:33:03.405167 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 30 05:33:03.405240 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 30 05:33:03.405311 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 30 05:33:03.405384 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 30 05:33:03.405454 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 30 05:33:03.405535 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 30 05:33:03.405607 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 30 05:33:03.405678 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 30 05:33:03.405988 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 30 05:33:03.406070 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 30 05:33:03.406144 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 30 05:33:03.406217 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 30 05:33:03.406301 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 30 05:33:03.406374 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 30 05:33:03.406447 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 30 05:33:03.406519 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 30 05:33:03.406591 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 30 05:33:03.406663 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 30 05:33:03.406758 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 30 05:33:03.406833 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 30 05:33:03.406905 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 30 05:33:03.406977 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 30 05:33:03.407049 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 30 05:33:03.407070 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Oct 30 05:33:03.407078 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Oct 30 05:33:03.407086 kernel: ACPI: PCI: Interrupt link LNKB 
disabled Oct 30 05:33:03.407093 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 30 05:33:03.407100 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Oct 30 05:33:03.407107 kernel: iommu: Default domain type: Translated Oct 30 05:33:03.407114 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 30 05:33:03.407128 kernel: PCI: Using ACPI for IRQ routing Oct 30 05:33:03.407135 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 30 05:33:03.407143 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Oct 30 05:33:03.407150 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Oct 30 05:33:03.407225 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Oct 30 05:33:03.407295 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Oct 30 05:33:03.407364 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 30 05:33:03.407381 kernel: vgaarb: loaded Oct 30 05:33:03.407389 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Oct 30 05:33:03.407396 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Oct 30 05:33:03.407403 kernel: clocksource: Switched to clocksource tsc-early Oct 30 05:33:03.407410 kernel: VFS: Disk quotas dquot_6.6.0 Oct 30 05:33:03.407417 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 30 05:33:03.407424 kernel: pnp: PnP ACPI init Oct 30 05:33:03.407508 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Oct 30 05:33:03.407576 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Oct 30 05:33:03.407641 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Oct 30 05:33:03.407711 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Oct 30 05:33:03.407798 kernel: pnp 00:06: [dma 2] Oct 30 05:33:03.407875 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Oct 30 05:33:03.407954 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Oct 30 
05:33:03.408020 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Oct 30 05:33:03.408030 kernel: pnp: PnP ACPI: found 8 devices Oct 30 05:33:03.408038 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 30 05:33:03.408045 kernel: NET: Registered PF_INET protocol family Oct 30 05:33:03.408052 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 30 05:33:03.408067 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 30 05:33:03.408074 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 30 05:33:03.408082 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 30 05:33:03.408089 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 30 05:33:03.408096 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 30 05:33:03.408103 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 30 05:33:03.408110 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 30 05:33:03.408122 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 30 05:33:03.408129 kernel: NET: Registered PF_XDP protocol family Oct 30 05:33:03.408203 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Oct 30 05:33:03.408276 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Oct 30 05:33:03.408347 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 30 05:33:03.408419 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 30 05:33:03.408501 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 30 05:33:03.408573 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Oct 30 05:33:03.408645 
kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Oct 30 05:33:03.408716 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Oct 30 05:33:03.408806 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Oct 30 05:33:03.408879 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Oct 30 05:33:03.408953 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Oct 30 05:33:03.409037 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Oct 30 05:33:03.409108 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Oct 30 05:33:03.409294 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Oct 30 05:33:03.409371 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Oct 30 05:33:03.409443 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Oct 30 05:33:03.409514 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Oct 30 05:33:03.409598 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Oct 30 05:33:03.409672 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Oct 30 05:33:03.409755 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Oct 30 05:33:03.409840 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Oct 30 05:33:03.409913 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Oct 30 05:33:03.409985 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Oct 30 05:33:03.410067 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Oct 30 05:33:03.410140 kernel: pci 
0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Oct 30 05:33:03.410211 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.410283 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.410353 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.410425 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.410495 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.410578 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.410650 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.410722 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.410803 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.410874 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.410945 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.411029 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.411102 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.411171 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.411242 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.411312 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.411383 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.411463 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.411534 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space 
Oct 30 05:33:03.411604 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.411675 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.412833 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.412930 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.413007 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.413099 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.413171 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.413244 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.413315 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.413387 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.413458 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.413541 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.413611 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.413682 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.413771 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.413847 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.413917 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.413999 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.414069 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.414141 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: 
can't assign; no space Oct 30 05:33:03.414210 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.414281 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.414351 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.414420 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.415575 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.415655 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.415730 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.415818 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.416194 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.416355 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.416430 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.416589 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.416792 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.416876 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.417039 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.417114 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.417187 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.417260 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.417332 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.417418 kernel: pci 0000:00:17.5: bridge 
window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.418076 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.418157 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.418350 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.418427 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.418499 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.418572 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.419414 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.419498 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.419572 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.419647 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.419721 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.422873 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.422961 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.423037 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.423130 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.423213 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.423283 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.423354 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.423434 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.423505 kernel: pci 
0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.423575 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.423646 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.423715 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.423804 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Oct 30 05:33:03.423875 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Oct 30 05:33:03.423946 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 30 05:33:03.424030 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Oct 30 05:33:03.424099 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Oct 30 05:33:03.424168 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Oct 30 05:33:03.424237 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 30 05:33:03.424315 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Oct 30 05:33:03.424386 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 30 05:33:03.424466 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Oct 30 05:33:03.424534 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Oct 30 05:33:03.424603 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Oct 30 05:33:03.424675 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 30 05:33:03.428345 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Oct 30 05:33:03.428445 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Oct 30 05:33:03.428518 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Oct 30 05:33:03.428592 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 30 05:33:03.428681 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Oct 30 05:33:03.442099 kernel: pci 0000:00:15.2: bridge window 
[mem 0xfcd00000-0xfcdfffff] Oct 30 05:33:03.442195 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 30 05:33:03.442270 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 30 05:33:03.442342 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 30 05:33:03.442412 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 30 05:33:03.442505 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 30 05:33:03.442575 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 30 05:33:03.444686 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 30 05:33:03.444812 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 30 05:33:03.444889 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 30 05:33:03.444961 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 30 05:33:03.445380 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 30 05:33:03.445454 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 30 05:33:03.445525 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 30 05:33:03.445598 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 30 05:33:03.445668 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 30 05:33:03.445737 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 30 05:33:03.446510 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Oct 30 05:33:03.446590 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 30 05:33:03.446662 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 30 05:33:03.446733 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 30 05:33:03.446816 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Oct 30 05:33:03.446889 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 30 05:33:03.446958 
kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 30 05:33:03.447029 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 30 05:33:03.447104 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 30 05:33:03.447176 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 30 05:33:03.447246 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 30 05:33:03.447315 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 30 05:33:03.447385 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 30 05:33:03.447458 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 30 05:33:03.447528 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Oct 30 05:33:03.447601 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 30 05:33:03.447673 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 30 05:33:03.447754 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 30 05:33:03.447832 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 30 05:33:03.447905 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 30 05:33:03.447975 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 30 05:33:03.448049 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 30 05:33:03.448121 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 30 05:33:03.448191 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 30 05:33:03.448261 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 30 05:33:03.448348 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 30 05:33:03.448422 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 30 05:33:03.448496 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 30 05:33:03.448567 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 
30 05:33:03.448638 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 30 05:33:03.448707 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 30 05:33:03.448989 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 30 05:33:03.449066 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 30 05:33:03.449137 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 30 05:33:03.449211 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 30 05:33:03.449282 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 30 05:33:03.449352 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 30 05:33:03.449422 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 30 05:33:03.449494 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 30 05:33:03.449563 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 30 05:33:03.449636 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 30 05:33:03.449709 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 30 05:33:03.449795 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 30 05:33:03.449868 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 30 05:33:03.449949 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 30 05:33:03.450022 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 30 05:33:03.450093 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 30 05:33:03.450167 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 30 05:33:03.450236 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 30 05:33:03.450315 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 30 05:33:03.450386 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 30 05:33:03.450455 kernel: pci 0000:00:17.6: bridge window [mem 
0xe6200000-0xe62fffff 64bit pref] Oct 30 05:33:03.450537 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 30 05:33:03.450611 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 30 05:33:03.450681 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 30 05:33:03.451583 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 30 05:33:03.451666 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 30 05:33:03.451751 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 30 05:33:03.451829 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 30 05:33:03.451903 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 30 05:33:03.451979 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 30 05:33:03.452056 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Oct 30 05:33:03.452125 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 30 05:33:03.452198 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 30 05:33:03.452268 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 30 05:33:03.452337 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 30 05:33:03.452409 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 30 05:33:03.452482 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 30 05:33:03.452551 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 30 05:33:03.452623 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 30 05:33:03.452694 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 30 05:33:03.452773 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 30 05:33:03.452852 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 30 05:33:03.452927 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 30 05:33:03.452997 kernel: pci 0000:00:18.5: 
bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 30 05:33:03.453069 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 30 05:33:03.453139 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 30 05:33:03.453208 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 30 05:33:03.453278 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 30 05:33:03.453351 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 30 05:33:03.453420 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 30 05:33:03.453488 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Oct 30 05:33:03.453551 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Oct 30 05:33:03.453612 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Oct 30 05:33:03.453674 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Oct 30 05:33:03.453738 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Oct 30 05:33:03.455158 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Oct 30 05:33:03.455227 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Oct 30 05:33:03.455293 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 30 05:33:03.455359 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Oct 30 05:33:03.455424 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Oct 30 05:33:03.455492 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Oct 30 05:33:03.455555 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Oct 30 05:33:03.455618 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Oct 30 05:33:03.455689 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Oct 30 05:33:03.455776 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Oct 30 05:33:03.455849 kernel: pci_bus 0000:03: resource 2 [mem 
0xc0000000-0xc01fffff 64bit pref] Oct 30 05:33:03.455923 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Oct 30 05:33:03.455989 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Oct 30 05:33:03.456053 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Oct 30 05:33:03.456122 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Oct 30 05:33:03.456189 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Oct 30 05:33:03.456255 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Oct 30 05:33:03.456329 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Oct 30 05:33:03.456392 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Oct 30 05:33:03.456461 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Oct 30 05:33:03.456525 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 30 05:33:03.456595 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Oct 30 05:33:03.456662 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Oct 30 05:33:03.456731 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Oct 30 05:33:03.457347 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Oct 30 05:33:03.457519 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Oct 30 05:33:03.457592 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Oct 30 05:33:03.457665 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Oct 30 05:33:03.457731 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Oct 30 05:33:03.457812 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Oct 30 05:33:03.457882 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Oct 30 05:33:03.457946 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Oct 30 05:33:03.458891 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 
64bit pref] Oct 30 05:33:03.458971 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Oct 30 05:33:03.459040 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Oct 30 05:33:03.459105 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Oct 30 05:33:03.459180 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Oct 30 05:33:03.459249 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 30 05:33:03.459318 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Oct 30 05:33:03.459384 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 30 05:33:03.459455 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Oct 30 05:33:03.459519 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Oct 30 05:33:03.459589 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Oct 30 05:33:03.459658 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Oct 30 05:33:03.459728 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Oct 30 05:33:03.459812 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 30 05:33:03.459885 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Oct 30 05:33:03.459951 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Oct 30 05:33:03.460240 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 30 05:33:03.460315 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Oct 30 05:33:03.460380 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Oct 30 05:33:03.460444 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Oct 30 05:33:03.460528 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Oct 30 05:33:03.460598 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Oct 30 05:33:03.460853 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Oct 30 
05:33:03.460924 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Oct 30 05:33:03.460990 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 30 05:33:03.461060 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Oct 30 05:33:03.461124 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 30 05:33:03.461197 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Oct 30 05:33:03.461261 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Oct 30 05:33:03.461330 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Oct 30 05:33:03.461395 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Oct 30 05:33:03.461466 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Oct 30 05:33:03.461533 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 30 05:33:03.461604 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Oct 30 05:33:03.461671 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Oct 30 05:33:03.461734 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Oct 30 05:33:03.461827 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Oct 30 05:33:03.461899 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Oct 30 05:33:03.461968 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Oct 30 05:33:03.462036 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Oct 30 05:33:03.462101 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Oct 30 05:33:03.462169 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Oct 30 05:33:03.462237 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 30 05:33:03.462311 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Oct 30 05:33:03.462377 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] 
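Each `pci_bus … resource N [mem start-end …]` line above describes an inclusive address range, so the window size is `end - start + 1`. A small sketch (assuming this exact message format) that extracts the window sizes, e.g. the 10 MiB memory window on bus 02 versus the 4 KiB I/O window on bus 0b:

```python
import re

# Representative resource lines copied from the log above.
lines = [
    "pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff]",
    "pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref]",
    "pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff]",
]

pat = re.compile(r"pci_bus (\S+): resource (\d+) \[(mem|io) (0x[0-9a-f]+)-(0x[0-9a-f]+)")
sizes = []
for line in lines:
    bus, idx, kind, start, end = pat.search(line).groups()
    size = int(end, 16) - int(start, 16) + 1  # ranges are inclusive
    sizes.append(size)
    print(f"{bus} resource {idx}: {kind} window, {size // 1024} KiB")
```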
Oct 30 05:33:03.462450 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Oct 30 05:33:03.462514 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Oct 30 05:33:03.462583 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Oct 30 05:33:03.462651 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Oct 30 05:33:03.462721 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Oct 30 05:33:03.462804 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 30 05:33:03.462883 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 30 05:33:03.462894 kernel: PCI: CLS 32 bytes, default 64 Oct 30 05:33:03.462902 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 30 05:33:03.462912 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Oct 30 05:33:03.462919 kernel: clocksource: Switched to clocksource tsc Oct 30 05:33:03.462927 kernel: Initialise system trusted keyrings Oct 30 05:33:03.462934 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 30 05:33:03.462941 kernel: Key type asymmetric registered Oct 30 05:33:03.462948 kernel: Asymmetric key parser 'x509' registered Oct 30 05:33:03.462955 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 30 05:33:03.462963 kernel: io scheduler mq-deadline registered Oct 30 05:33:03.462970 kernel: io scheduler kyber registered Oct 30 05:33:03.462978 kernel: io scheduler bfq registered Oct 30 05:33:03.463051 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Oct 30 05:33:03.463122 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.463195 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Oct 30 05:33:03.463268 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.463341 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Oct 30 05:33:03.463412 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.463484 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Oct 30 05:33:03.463553 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.463624 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Oct 30 05:33:03.463698 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.463973 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Oct 30 05:33:03.464068 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.464448 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Oct 30 05:33:03.464549 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.464627 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Oct 30 05:33:03.464702 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.464785 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Oct 30 05:33:03.464875 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.465111 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Oct 30 05:33:03.465187 kernel: pcieport 
0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.465261 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Oct 30 05:33:03.465337 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.465410 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Oct 30 05:33:03.465481 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.465562 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Oct 30 05:33:03.465636 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.465708 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Oct 30 05:33:03.465802 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.465879 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Oct 30 05:33:03.465951 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.466024 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Oct 30 05:33:03.466094 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.466348 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Oct 30 05:33:03.466429 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.466504 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Oct 30 
05:33:03.466577 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.466650 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Oct 30 05:33:03.466722 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.466807 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Oct 30 05:33:03.466878 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.466954 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Oct 30 05:33:03.467025 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.467097 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Oct 30 05:33:03.467170 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.467242 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Oct 30 05:33:03.467313 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.467386 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Oct 30 05:33:03.467458 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.467530 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Oct 30 05:33:03.467601 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.467672 kernel: pcieport 0000:00:18.1: PME: 
Signaling with IRQ 49 Oct 30 05:33:03.467768 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.467850 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Oct 30 05:33:03.467921 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.467992 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Oct 30 05:33:03.468061 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.468132 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Oct 30 05:33:03.468203 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.468277 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Oct 30 05:33:03.468349 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.468420 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Oct 30 05:33:03.468490 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.468562 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Oct 30 05:33:03.468633 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 30 05:33:03.468646 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 30 05:33:03.468654 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 30 05:33:03.468662 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 30 
05:33:03.468669 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Oct 30 05:33:03.468677 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 30 05:33:03.468684 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 30 05:33:03.468771 kernel: rtc_cmos 00:01: registered as rtc0 Oct 30 05:33:03.468846 kernel: rtc_cmos 00:01: setting system clock to 2025-10-30T05:33:01 UTC (1761802381) Oct 30 05:33:03.468857 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 30 05:33:03.468921 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Oct 30 05:33:03.468931 kernel: intel_pstate: CPU model not supported Oct 30 05:33:03.468939 kernel: NET: Registered PF_INET6 protocol family Oct 30 05:33:03.468946 kernel: Segment Routing with IPv6 Oct 30 05:33:03.468956 kernel: In-situ OAM (IOAM) with IPv6 Oct 30 05:33:03.468963 kernel: NET: Registered PF_PACKET protocol family Oct 30 05:33:03.468970 kernel: Key type dns_resolver registered Oct 30 05:33:03.468978 kernel: IPI shorthand broadcast: enabled Oct 30 05:33:03.468986 kernel: sched_clock: Marking stable (1487003780, 178766728)->(1679245929, -13475421) Oct 30 05:33:03.468993 kernel: registered taskstats version 1 Oct 30 05:33:03.469001 kernel: Loading compiled-in X.509 certificates Oct 30 05:33:03.469009 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 5c702ca034417fe95fe5f6d2d897a0a9ba0dc5c4' Oct 30 05:33:03.469016 kernel: Demotion targets for Node 0: null Oct 30 05:33:03.469024 kernel: Key type .fscrypt registered Oct 30 05:33:03.469032 kernel: Key type fscrypt-provisioning registered Oct 30 05:33:03.469039 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 30 05:33:03.469046 kernel: ima: Allocated hash algorithm: sha1 Oct 30 05:33:03.469054 kernel: ima: No architecture policies found Oct 30 05:33:03.469062 kernel: clk: Disabling unused clocks Oct 30 05:33:03.469069 kernel: Freeing unused kernel image (initmem) memory: 15960K Oct 30 05:33:03.469076 kernel: Write protecting the kernel read-only data: 45056k Oct 30 05:33:03.469084 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Oct 30 05:33:03.469091 kernel: Run /init as init process Oct 30 05:33:03.469098 kernel: with arguments: Oct 30 05:33:03.469105 kernel: /init Oct 30 05:33:03.469113 kernel: with environment: Oct 30 05:33:03.469120 kernel: HOME=/ Oct 30 05:33:03.469127 kernel: TERM=linux Oct 30 05:33:03.469134 kernel: SCSI subsystem initialized Oct 30 05:33:03.469142 kernel: VMware PVSCSI driver - version 1.0.7.0-k Oct 30 05:33:03.469149 kernel: vmw_pvscsi: using 64bit dma Oct 30 05:33:03.469158 kernel: vmw_pvscsi: max_id: 16 Oct 30 05:33:03.469165 kernel: vmw_pvscsi: setting ring_pages to 8 Oct 30 05:33:03.469173 kernel: vmw_pvscsi: enabling reqCallThreshold Oct 30 05:33:03.469181 kernel: vmw_pvscsi: driver-based request coalescing enabled Oct 30 05:33:03.469188 kernel: vmw_pvscsi: using MSI-X Oct 30 05:33:03.469271 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Oct 30 05:33:03.469347 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Oct 30 05:33:03.469432 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Oct 30 05:33:03.469511 kernel: sd 0:0:0:0: [sda] 25804800 512-byte logical blocks: (13.2 GB/12.3 GiB) Oct 30 05:33:03.469586 kernel: sd 0:0:0:0: [sda] Write Protect is off Oct 30 05:33:03.469661 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Oct 30 05:33:03.469735 kernel: sd 0:0:0:0: [sda] Cache data unavailable Oct 30 05:33:03.469838 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Oct 30 05:33:03.469850 kernel: sda: sda1 sda2 sda3 
sda4 sda6 sda7 sda9 Oct 30 05:33:03.469926 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Oct 30 05:33:03.469936 kernel: libata version 3.00 loaded. Oct 30 05:33:03.470007 kernel: ata_piix 0000:00:07.1: version 2.13 Oct 30 05:33:03.470082 kernel: scsi host1: ata_piix Oct 30 05:33:03.470159 kernel: scsi host2: ata_piix Oct 30 05:33:03.470171 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 Oct 30 05:33:03.470181 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 Oct 30 05:33:03.470189 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Oct 30 05:33:03.470270 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Oct 30 05:33:03.470346 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Oct 30 05:33:03.470356 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 30 05:33:03.470428 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 30 05:33:03.470440 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 30 05:33:03.470448 kernel: device-mapper: uevent: version 1.0.3 Oct 30 05:33:03.470456 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 30 05:33:03.470464 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 30 05:33:03.470472 kernel: raid6: avx2x4 gen() 45556 MB/s Oct 30 05:33:03.470480 kernel: raid6: avx2x2 gen() 49480 MB/s Oct 30 05:33:03.470487 kernel: raid6: avx2x1 gen() 44188 MB/s Oct 30 05:33:03.470496 kernel: raid6: using algorithm avx2x2 gen() 49480 MB/s Oct 30 05:33:03.470503 kernel: raid6: .... 
xor() 31840 MB/s, rmw enabled Oct 30 05:33:03.470511 kernel: raid6: using avx2x2 recovery algorithm Oct 30 05:33:03.470518 kernel: xor: automatically using best checksumming function avx Oct 30 05:33:03.470525 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 30 05:33:03.470533 kernel: BTRFS: device fsid 7b765398-5394-455a-b2b0-895043779edc devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (198) Oct 30 05:33:03.470540 kernel: BTRFS info (device dm-0): first mount of filesystem 7b765398-5394-455a-b2b0-895043779edc Oct 30 05:33:03.470549 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 30 05:33:03.470556 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 30 05:33:03.470563 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 30 05:33:03.470571 kernel: BTRFS info (device dm-0): enabling free space tree Oct 30 05:33:03.470578 kernel: loop: module loaded Oct 30 05:33:03.470586 kernel: loop0: detected capacity change from 0 to 100136 Oct 30 05:33:03.470593 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 30 05:33:03.470602 systemd[1]: Successfully made /usr/ read-only. Oct 30 05:33:03.470612 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 30 05:33:03.470620 systemd[1]: Detected virtualization vmware. Oct 30 05:33:03.470627 systemd[1]: Detected architecture x86-64. Oct 30 05:33:03.470635 systemd[1]: Running in initrd. Oct 30 05:33:03.470642 systemd[1]: No hostname configured, using default hostname. Oct 30 05:33:03.470651 systemd[1]: Hostname set to . Oct 30 05:33:03.470659 systemd[1]: Initializing machine ID from random generator. 
Oct 30 05:33:03.470667 systemd[1]: Queued start job for default target initrd.target. Oct 30 05:33:03.470674 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 30 05:33:03.470682 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 30 05:33:03.470690 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 30 05:33:03.470698 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 30 05:33:03.470707 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 30 05:33:03.470716 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 30 05:33:03.470724 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 30 05:33:03.470732 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 30 05:33:03.470739 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 30 05:33:03.470757 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 30 05:33:03.470765 systemd[1]: Reached target paths.target - Path Units. Oct 30 05:33:03.470773 systemd[1]: Reached target slices.target - Slice Units. Oct 30 05:33:03.470780 systemd[1]: Reached target swap.target - Swaps. Oct 30 05:33:03.470788 systemd[1]: Reached target timers.target - Timer Units. Oct 30 05:33:03.470795 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 30 05:33:03.470803 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 30 05:33:03.470812 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 30 05:33:03.470820 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Oct 30 05:33:03.470828 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 30 05:33:03.470835 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 30 05:33:03.470843 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 30 05:33:03.470851 systemd[1]: Reached target sockets.target - Socket Units. Oct 30 05:33:03.470858 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Oct 30 05:33:03.470867 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 30 05:33:03.470875 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 30 05:33:03.470882 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 30 05:33:03.470890 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 30 05:33:03.470898 systemd[1]: Starting systemd-fsck-usr.service... Oct 30 05:33:03.470905 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 30 05:33:03.470915 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 30 05:33:03.470922 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 05:33:03.470930 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 30 05:33:03.470938 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 30 05:33:03.470947 systemd[1]: Finished systemd-fsck-usr.service. Oct 30 05:33:03.470954 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 30 05:33:03.470980 systemd-journald[335]: Collecting audit messages is disabled. Oct 30 05:33:03.471000 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Oct 30 05:33:03.471008 kernel: Bridge firewalling registered Oct 30 05:33:03.471016 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 30 05:33:03.471024 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 30 05:33:03.471032 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 30 05:33:03.471040 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 30 05:33:03.471048 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 05:33:03.471057 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 30 05:33:03.471065 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 30 05:33:03.471073 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 30 05:33:03.471081 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 30 05:33:03.471089 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Oct 30 05:33:03.471097 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 30 05:33:03.471106 systemd-journald[335]: Journal started Oct 30 05:33:03.471122 systemd-journald[335]: Runtime Journal (/run/log/journal/ed54b471331841fdbddbf99b3ca571d2) is 4.8M, max 38.4M, 33.6M free. Oct 30 05:33:03.400891 systemd-modules-load[336]: Inserted module 'br_netfilter' Oct 30 05:33:03.476139 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 30 05:33:03.476169 systemd[1]: Started systemd-journald.service - Journal Service. Oct 30 05:33:03.483867 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Oct 30 05:33:03.489475 systemd-resolved[352]: Positive Trust Anchors: Oct 30 05:33:03.489484 systemd-resolved[352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 30 05:33:03.489487 systemd-resolved[352]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 30 05:33:03.489510 systemd-resolved[352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 30 05:33:03.500523 dracut-cmdline[365]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 ip=139.178.70.106::139.178.70.97:28::ens192:off:1.1.1.1:1.0.0.1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c9268d42679a2004cee399a64e9ca764369a7ade73fcdb4dc46afa45c3a8dab8 Oct 30 05:33:03.501823 systemd-tmpfiles[375]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 30 05:33:03.505602 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 30 05:33:03.514168 systemd-resolved[352]: Defaulting to hostname 'linux'. Oct 30 05:33:03.514946 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 30 05:33:03.515191 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Oct 30 05:33:03.565761 kernel: Loading iSCSI transport class v2.0-870. Oct 30 05:33:03.576771 kernel: iscsi: registered transport (tcp) Oct 30 05:33:03.602811 kernel: iscsi: registered transport (qla4xxx) Oct 30 05:33:03.602867 kernel: QLogic iSCSI HBA Driver Oct 30 05:33:03.628546 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 30 05:33:03.645469 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 30 05:33:03.646482 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 30 05:33:03.668891 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 30 05:33:03.669879 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 30 05:33:03.670819 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 30 05:33:03.688254 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 30 05:33:03.689493 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 30 05:33:03.707986 systemd-udevd[614]: Using default interface naming scheme 'v257'. Oct 30 05:33:03.714850 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 05:33:03.716273 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 30 05:33:03.732170 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 30 05:33:03.733174 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 30 05:33:03.735904 dracut-pre-trigger[691]: rd.md=0: removing MD RAID activation Oct 30 05:33:03.752369 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 30 05:33:03.753142 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Oct 30 05:33:03.765255 systemd-networkd[721]: lo: Link UP Oct 30 05:33:03.765464 systemd-networkd[721]: lo: Gained carrier Oct 30 05:33:03.765772 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 30 05:33:03.766018 systemd[1]: Reached target network.target - Network. Oct 30 05:33:03.837052 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 30 05:33:03.838318 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 30 05:33:03.904970 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Oct 30 05:33:03.911815 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Oct 30 05:33:03.923160 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Oct 30 05:33:03.930614 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Oct 30 05:33:03.931797 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 30 05:33:04.214764 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Oct 30 05:33:04.216977 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Oct 30 05:33:04.222769 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Oct 30 05:33:04.244772 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Oct 30 05:33:04.246037 systemd-networkd[721]: eth0: Interface name change detected, renamed to ens192. Oct 30 05:33:04.249988 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 30 05:33:04.250078 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 05:33:04.250301 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 05:33:04.259366 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 05:33:04.270561 (udev-worker)[759]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. 
Oct 30 05:33:04.277109 systemd-networkd[721]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Oct 30 05:33:04.278998 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 30 05:33:04.282756 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 30 05:33:04.281135 systemd-networkd[721]: ens192: Link UP Oct 30 05:33:04.281138 systemd-networkd[721]: ens192: Gained carrier Oct 30 05:33:04.319462 kernel: cryptd: max_cpu_qlen set to 1000 Oct 30 05:33:04.320997 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 05:33:04.323807 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Oct 30 05:33:04.343457 kernel: AES CTR mode by8 optimization enabled Oct 30 05:33:04.485788 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 30 05:33:04.486603 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 30 05:33:04.486987 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 30 05:33:04.487295 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 30 05:33:04.488507 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 30 05:33:04.506144 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 30 05:33:05.324385 disk-uuid[787]: Warning: The kernel is still using the old partition table. Oct 30 05:33:05.324385 disk-uuid[787]: The new table will be used at the next reboot or after you Oct 30 05:33:05.324385 disk-uuid[787]: run partprobe(8) or kpartx(8) Oct 30 05:33:05.324385 disk-uuid[787]: The operation has completed successfully. Oct 30 05:33:05.331171 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 30 05:33:05.331236 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 30 05:33:05.332236 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Oct 30 05:33:05.351759 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (878) Oct 30 05:33:05.354167 kernel: BTRFS info (device sda6): first mount of filesystem 6cd1a954-c777-4af3-b53e-32d628b8ad41 Oct 30 05:33:05.354187 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 30 05:33:05.357755 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 30 05:33:05.357775 kernel: BTRFS info (device sda6): enabling free space tree Oct 30 05:33:05.361760 kernel: BTRFS info (device sda6): last unmount of filesystem 6cd1a954-c777-4af3-b53e-32d628b8ad41 Oct 30 05:33:05.362316 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 30 05:33:05.363220 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 30 05:33:05.509688 ignition[897]: Ignition 2.22.0 Oct 30 05:33:05.509702 ignition[897]: Stage: fetch-offline Oct 30 05:33:05.509737 ignition[897]: no configs at "/usr/lib/ignition/base.d" Oct 30 05:33:05.509754 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 30 05:33:05.509826 ignition[897]: parsed url from cmdline: "" Oct 30 05:33:05.509830 ignition[897]: no config URL provided Oct 30 05:33:05.509834 ignition[897]: reading system config file "/usr/lib/ignition/user.ign" Oct 30 05:33:05.509839 ignition[897]: no config at "/usr/lib/ignition/user.ign" Oct 30 05:33:05.510415 ignition[897]: config successfully fetched Oct 30 05:33:05.510434 ignition[897]: parsing config with SHA512: 03bbd3d8550a339528af3357a7de7729180f3e7261fab00bceac4a209822800aa08f2b4b1174ea67a5eeb3646b17f3007d8d235f64002815c904a2c7707859f4 Oct 30 05:33:05.513976 unknown[897]: fetched base config from "system" Oct 30 05:33:05.513986 unknown[897]: fetched user config from "vmware" Oct 30 05:33:05.514202 ignition[897]: fetch-offline: fetch-offline passed Oct 30 05:33:05.514235 ignition[897]: Ignition finished successfully Oct 30 05:33:05.515328 systemd[1]: Finished 
ignition-fetch-offline.service - Ignition (fetch-offline). Oct 30 05:33:05.515567 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 30 05:33:05.516064 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 30 05:33:05.531674 ignition[903]: Ignition 2.22.0 Oct 30 05:33:05.531683 ignition[903]: Stage: kargs Oct 30 05:33:05.531802 ignition[903]: no configs at "/usr/lib/ignition/base.d" Oct 30 05:33:05.531808 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 30 05:33:05.532329 ignition[903]: kargs: kargs passed Oct 30 05:33:05.532360 ignition[903]: Ignition finished successfully Oct 30 05:33:05.534050 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 30 05:33:05.534906 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 30 05:33:05.552918 ignition[909]: Ignition 2.22.0 Oct 30 05:33:05.552925 ignition[909]: Stage: disks Oct 30 05:33:05.553019 ignition[909]: no configs at "/usr/lib/ignition/base.d" Oct 30 05:33:05.553024 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 30 05:33:05.553544 ignition[909]: disks: disks passed Oct 30 05:33:05.553575 ignition[909]: Ignition finished successfully Oct 30 05:33:05.555401 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 30 05:33:05.555774 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 30 05:33:05.556029 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 30 05:33:05.556288 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 30 05:33:05.556514 systemd[1]: Reached target sysinit.target - System Initialization. Oct 30 05:33:05.556759 systemd[1]: Reached target basic.target - Basic System. Oct 30 05:33:05.557507 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Oct 30 05:33:05.584600 systemd-fsck[917]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Oct 30 05:33:05.585950 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 30 05:33:05.587120 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 30 05:33:05.704771 kernel: EXT4-fs (sda9): mounted filesystem 7eb44aae-6027-4622-bac3-c349cf408231 r/w with ordered data mode. Quota mode: none. Oct 30 05:33:05.705076 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 30 05:33:05.705522 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 30 05:33:05.707133 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 30 05:33:05.709879 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 30 05:33:05.710501 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 30 05:33:05.710784 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 30 05:33:05.711011 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 30 05:33:05.716856 systemd-networkd[721]: ens192: Gained IPv6LL Oct 30 05:33:05.718092 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 30 05:33:05.719250 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Oct 30 05:33:05.723776 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (925) Oct 30 05:33:05.725996 kernel: BTRFS info (device sda6): first mount of filesystem 6cd1a954-c777-4af3-b53e-32d628b8ad41 Oct 30 05:33:05.726039 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 30 05:33:05.731340 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 30 05:33:05.731388 kernel: BTRFS info (device sda6): enabling free space tree Oct 30 05:33:05.732845 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 30 05:33:05.800364 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory Oct 30 05:33:05.803320 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory Oct 30 05:33:05.806346 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory Oct 30 05:33:05.808819 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory Oct 30 05:33:05.874587 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 30 05:33:05.875554 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 30 05:33:05.876847 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 30 05:33:05.888328 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 30 05:33:05.890835 kernel: BTRFS info (device sda6): last unmount of filesystem 6cd1a954-c777-4af3-b53e-32d628b8ad41 Oct 30 05:33:05.907781 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Oct 30 05:33:05.915076 ignition[1038]: INFO : Ignition 2.22.0 Oct 30 05:33:05.915546 ignition[1038]: INFO : Stage: mount Oct 30 05:33:05.915546 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 30 05:33:05.915546 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 30 05:33:05.916682 ignition[1038]: INFO : mount: mount passed Oct 30 05:33:05.916849 ignition[1038]: INFO : Ignition finished successfully Oct 30 05:33:05.917703 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 30 05:33:05.918732 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 30 05:33:06.706669 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 30 05:33:06.772764 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1050) Oct 30 05:33:06.776394 kernel: BTRFS info (device sda6): first mount of filesystem 6cd1a954-c777-4af3-b53e-32d628b8ad41 Oct 30 05:33:06.776420 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 30 05:33:06.780539 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 30 05:33:06.780571 kernel: BTRFS info (device sda6): enabling free space tree Oct 30 05:33:06.781814 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 30 05:33:06.804761 ignition[1067]: INFO : Ignition 2.22.0
Oct 30 05:33:06.804761 ignition[1067]: INFO : Stage: files
Oct 30 05:33:06.804761 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 30 05:33:06.804761 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 30 05:33:06.805492 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping
Oct 30 05:33:06.806236 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 30 05:33:06.806236 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 30 05:33:06.808684 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 30 05:33:06.808881 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 30 05:33:06.809128 unknown[1067]: wrote ssh authorized keys file for user: core
Oct 30 05:33:06.809341 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 30 05:33:06.810642 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 30 05:33:06.810854 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Oct 30 05:33:06.850959 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 30 05:33:06.920671 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 30 05:33:06.920671 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 30 05:33:06.921232 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 30 05:33:06.921232 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 30 05:33:06.921232 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 30 05:33:06.921232 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 30 05:33:06.921232 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 30 05:33:06.921232 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 30 05:33:06.921232 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 30 05:33:06.922556 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 30 05:33:06.922556 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 30 05:33:06.922556 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 30 05:33:06.924365 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 30 05:33:06.924365 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 30 05:33:06.924932 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Oct 30 05:33:07.328113 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 30 05:33:07.776968 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 30 05:33:07.776968 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Oct 30 05:33:07.777806 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Oct 30 05:33:07.778066 ignition[1067]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Oct 30 05:33:07.778273 ignition[1067]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 30 05:33:07.778607 ignition[1067]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 30 05:33:07.778607 ignition[1067]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Oct 30 05:33:07.778607 ignition[1067]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Oct 30 05:33:07.779253 ignition[1067]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 30 05:33:07.779253 ignition[1067]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 30 05:33:07.779253 ignition[1067]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Oct 30 05:33:07.779253 ignition[1067]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Oct 30 05:33:07.802600 ignition[1067]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 30 05:33:07.804949 ignition[1067]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 30 05:33:07.805129 ignition[1067]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 30 05:33:07.805129 ignition[1067]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Oct 30 05:33:07.805129 ignition[1067]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Oct 30 05:33:07.806324 ignition[1067]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 30 05:33:07.806324 ignition[1067]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 30 05:33:07.806324 ignition[1067]: INFO : files: files passed
Oct 30 05:33:07.806324 ignition[1067]: INFO : Ignition finished successfully
Oct 30 05:33:07.806365 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 30 05:33:07.807817 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 30 05:33:07.808476 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 30 05:33:07.815491 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 30 05:33:07.815551 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
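Every Ignition op in the entries above logs a matching `[started]`/`[finished]` pair (including nested ops such as `op(10): op(11)`). As a minimal sketch, pairing can be checked mechanically with a throwaway parser; `unfinished_ops` is a hypothetical helper, and the sample lines are abridged from this log:

```python
import re

def unfinished_ops(lines):
    """Return the set of op IDs that were started but never finished."""
    open_ops = set()
    for line in lines:
        # Match the innermost op directly before the [started]/[finished] tag.
        m = re.search(r"(op\([0-9a-f]+\)): \[(started|finished)\]", line)
        if m:
            op, state = m.group(1), m.group(2)
            if state == "started":
                open_ops.add(op)
            else:
                open_ops.discard(op)
    return open_ops

sample = [
    'ignition[1067]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"',
    'ignition[1067]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"',
]
print(unfinished_ops(sample))  # empty set: every started op also finished
```

A run over the full `files` stage above would likewise come back empty, which is consistent with the final "Ignition finished successfully" entry.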
Oct 30 05:33:07.821540 initrd-setup-root-after-ignition[1104]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 05:33:07.822097 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 05:33:07.822097 initrd-setup-root-after-ignition[1100]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 05:33:07.823219 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 30 05:33:07.823614 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 30 05:33:07.824471 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 30 05:33:07.849255 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 30 05:33:07.849322 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 30 05:33:07.849644 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 30 05:33:07.849763 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 30 05:33:07.850120 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 30 05:33:07.850670 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 30 05:33:07.863623 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 30 05:33:07.864502 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 30 05:33:07.878984 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 30 05:33:07.879120 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 30 05:33:07.879306 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 30 05:33:07.879510 systemd[1]: Stopped target timers.target - Timer Units.
Oct 30 05:33:07.879686 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 30 05:33:07.879790 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 30 05:33:07.880146 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 30 05:33:07.880310 systemd[1]: Stopped target basic.target - Basic System.
Oct 30 05:33:07.880503 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 30 05:33:07.880702 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 30 05:33:07.880934 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 30 05:33:07.881165 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 30 05:33:07.881369 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 30 05:33:07.881624 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 30 05:33:07.881894 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 30 05:33:07.882139 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 30 05:33:07.882339 systemd[1]: Stopped target swap.target - Swaps.
Oct 30 05:33:07.882525 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 30 05:33:07.882611 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 30 05:33:07.882991 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 30 05:33:07.883163 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 30 05:33:07.883349 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 30 05:33:07.883397 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 30 05:33:07.883589 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 30 05:33:07.883650 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 30 05:33:07.883985 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 30 05:33:07.884091 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 30 05:33:07.884374 systemd[1]: Stopped target paths.target - Path Units.
Oct 30 05:33:07.884497 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 30 05:33:07.887797 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 05:33:07.888035 systemd[1]: Stopped target slices.target - Slice Units.
Oct 30 05:33:07.888237 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 30 05:33:07.888435 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 30 05:33:07.888497 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 30 05:33:07.888645 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 30 05:33:07.888694 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 30 05:33:07.888873 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 30 05:33:07.888947 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 30 05:33:07.889191 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 30 05:33:07.889254 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 30 05:33:07.890060 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 30 05:33:07.890172 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 30 05:33:07.890238 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 30 05:33:07.890819 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 30 05:33:07.891792 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 30 05:33:07.891886 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 30 05:33:07.892115 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 30 05:33:07.892177 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 30 05:33:07.892433 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 30 05:33:07.892495 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 30 05:33:07.896677 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 30 05:33:07.902037 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 30 05:33:07.912726 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 30 05:33:07.916637 ignition[1124]: INFO : Ignition 2.22.0
Oct 30 05:33:07.916637 ignition[1124]: INFO : Stage: umount
Oct 30 05:33:07.917196 ignition[1124]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 30 05:33:07.917196 ignition[1124]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 30 05:33:07.917784 ignition[1124]: INFO : umount: umount passed
Oct 30 05:33:07.917784 ignition[1124]: INFO : Ignition finished successfully
Oct 30 05:33:07.918221 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 30 05:33:07.918301 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 30 05:33:07.918702 systemd[1]: Stopped target network.target - Network.
Oct 30 05:33:07.918839 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 30 05:33:07.918870 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 30 05:33:07.919018 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 30 05:33:07.919042 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 30 05:33:07.919177 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 30 05:33:07.919202 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 30 05:33:07.919346 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 30 05:33:07.919368 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 30 05:33:07.919558 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 30 05:33:07.919852 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 30 05:33:07.925493 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 30 05:33:07.925724 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 30 05:33:07.927324 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 30 05:33:07.927836 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 30 05:33:07.927986 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 05:33:07.928710 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 30 05:33:07.928953 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 30 05:33:07.928981 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 30 05:33:07.929101 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Oct 30 05:33:07.929126 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Oct 30 05:33:07.929247 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 30 05:33:07.932275 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 30 05:33:07.933881 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 30 05:33:07.935130 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 30 05:33:07.935161 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 30 05:33:07.935498 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 30 05:33:07.935524 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 30 05:33:07.942706 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 30 05:33:07.942855 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 30 05:33:07.943181 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 30 05:33:07.943209 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 30 05:33:07.943417 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 30 05:33:07.943433 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 05:33:07.943591 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 30 05:33:07.943619 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 30 05:33:07.943902 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 30 05:33:07.943928 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 30 05:33:07.944215 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 30 05:33:07.944239 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 30 05:33:07.945028 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 30 05:33:07.945129 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 30 05:33:07.945158 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 30 05:33:07.945278 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 30 05:33:07.945300 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 30 05:33:07.945414 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 30 05:33:07.945438 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 05:33:07.956579 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 30 05:33:07.956657 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 30 05:33:07.988951 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 30 05:33:07.989041 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 30 05:33:08.231577 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 30 05:33:08.231658 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 30 05:33:08.231986 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 30 05:33:08.232097 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 30 05:33:08.232129 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 30 05:33:08.232734 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 30 05:33:08.242198 systemd[1]: Switching root.
Oct 30 05:33:08.286438 systemd-journald[335]: Journal stopped
Oct 30 05:33:10.541694 systemd-journald[335]: Received SIGTERM from PID 1 (systemd).
Oct 30 05:33:10.541715 kernel: SELinux: policy capability network_peer_controls=1
Oct 30 05:33:10.541724 kernel: SELinux: policy capability open_perms=1
Oct 30 05:33:10.541731 kernel: SELinux: policy capability extended_socket_class=1
Oct 30 05:33:10.541737 kernel: SELinux: policy capability always_check_network=0
Oct 30 05:33:10.541757 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 30 05:33:10.542326 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 30 05:33:10.542336 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 30 05:33:10.542342 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 30 05:33:10.542349 kernel: SELinux: policy capability userspace_initial_context=0
Oct 30 05:33:10.542355 kernel: audit: type=1403 audit(1761802389.088:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 30 05:33:10.542368 systemd[1]: Successfully loaded SELinux policy in 111.788ms.
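The gap in the entries above between "Journal stopped" (05:33:08.286438) and journald's next message on the other side of the root switch (05:33:10.541694) is the time the journal was down across switch-root. It can be computed from the journal's console timestamp format; a small sketch, assuming both stamps fall on the same day and pinning an arbitrary year since the format carries none:

```python
from datetime import datetime

def stamp(s):
    # Console journal stamps ("Oct 30 05:33:08.286438") carry no year;
    # prepend one so strptime can parse them.
    return datetime.strptime("2025 " + s, "%Y %b %d %H:%M:%S.%f")

stopped   = stamp("Oct 30 05:33:08.286438")   # "Journal stopped"
restarted = stamp("Oct 30 05:33:10.541694")   # journald logs again after switch-root
delta = (restarted - stopped).total_seconds()
print(round(delta, 6))  # → 2.255256 seconds of journal downtime
```

The same helper works for any pair of stamps in this log, e.g. measuring the Ignition `files` stage or the SELinux policy load window.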
Oct 30 05:33:10.542378 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.915ms.
Oct 30 05:33:10.542386 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 30 05:33:10.542394 systemd[1]: Detected virtualization vmware.
Oct 30 05:33:10.542401 systemd[1]: Detected architecture x86-64.
Oct 30 05:33:10.542410 systemd[1]: Detected first boot.
Oct 30 05:33:10.542418 systemd[1]: Initializing machine ID from random generator.
Oct 30 05:33:10.542425 zram_generator::config[1168]: No configuration found.
Oct 30 05:33:10.542532 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Oct 30 05:33:10.542544 kernel: Guest personality initialized and is active
Oct 30 05:33:10.542553 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 30 05:33:10.542559 kernel: Initialized host personality
Oct 30 05:33:10.542567 kernel: NET: Registered PF_VSOCK protocol family
Oct 30 05:33:10.542578 systemd[1]: Populated /etc with preset unit settings.
Oct 30 05:33:10.542591 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Oct 30 05:33:10.542606 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Oct 30 05:33:10.542614 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 30 05:33:10.542622 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 30 05:33:10.542629 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 30 05:33:10.542636 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 30 05:33:10.542644 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 30 05:33:10.542654 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 30 05:33:10.542661 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 30 05:33:10.542668 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 30 05:33:10.542676 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 30 05:33:10.542683 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 30 05:33:10.542690 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 30 05:33:10.542703 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 30 05:33:10.542712 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 05:33:10.542722 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 30 05:33:10.542730 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 30 05:33:10.542737 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 30 05:33:10.542755 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 30 05:33:10.542763 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 30 05:33:10.542772 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 30 05:33:10.542780 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 30 05:33:10.542792 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 30 05:33:10.542800 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 30 05:33:10.542807 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 30 05:33:10.542815 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 30 05:33:10.542825 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 30 05:33:10.542832 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 30 05:33:10.542840 systemd[1]: Reached target slices.target - Slice Units.
Oct 30 05:33:10.542847 systemd[1]: Reached target swap.target - Swaps.
Oct 30 05:33:10.542855 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 30 05:33:10.542862 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 30 05:33:10.542871 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 30 05:33:10.542879 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 05:33:10.542886 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 30 05:33:10.542894 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 05:33:10.542902 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 30 05:33:10.542911 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 30 05:33:10.542919 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 30 05:33:10.542926 systemd[1]: Mounting media.mount - External Media Directory...
Oct 30 05:33:10.542934 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 05:33:10.542942 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 30 05:33:10.542950 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 30 05:33:10.542957 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 30 05:33:10.542966 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 30 05:33:10.542974 systemd[1]: Reached target machines.target - Containers.
Oct 30 05:33:10.542982 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 30 05:33:10.542989 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Oct 30 05:33:10.542997 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 30 05:33:10.543004 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 30 05:33:10.543013 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 30 05:33:10.543022 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 30 05:33:10.543029 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 30 05:33:10.543037 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 30 05:33:10.543044 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 30 05:33:10.543052 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 30 05:33:10.543059 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 30 05:33:10.543068 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 30 05:33:10.543076 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 30 05:33:10.543084 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 30 05:33:10.543091 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 30 05:33:10.543099 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 30 05:33:10.543107 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 30 05:33:10.543114 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 30 05:33:10.543123 kernel: fuse: init (API version 7.41)
Oct 30 05:33:10.543130 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 30 05:33:10.543138 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 30 05:33:10.543145 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 30 05:33:10.543153 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 05:33:10.543161 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 30 05:33:10.543170 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 30 05:33:10.543178 systemd[1]: Mounted media.mount - External Media Directory.
Oct 30 05:33:10.543185 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 30 05:33:10.543193 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 30 05:33:10.543200 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 30 05:33:10.543208 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 30 05:33:10.543215 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 30 05:33:10.543224 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 30 05:33:10.543232 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 30 05:33:10.543240 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 30 05:33:10.543248 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 30 05:33:10.543255 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 30 05:33:10.543263 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 30 05:33:10.543270 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 30 05:33:10.543279 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 30 05:33:10.543287 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 30 05:33:10.543294 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 30 05:33:10.543303 kernel: ACPI: bus type drm_connector registered
Oct 30 05:33:10.543310 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 30 05:33:10.543317 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 30 05:33:10.543325 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 30 05:33:10.543334 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 30 05:33:10.543342 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 30 05:33:10.543352 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 30 05:33:10.543361 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Oct 30 05:33:10.543369 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 30 05:33:10.543377 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 30 05:33:10.543385 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 30 05:33:10.543395 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 30 05:33:10.543403 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 30 05:33:10.543411 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 30 05:33:10.543418 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 30 05:33:10.543427 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 30 05:33:10.543450 systemd-journald[1254]: Collecting audit messages is disabled.
Oct 30 05:33:10.543468 systemd-journald[1254]: Journal started
Oct 30 05:33:10.543485 systemd-journald[1254]: Runtime Journal (/run/log/journal/305162d36ddf4cd1a49ee6f152da6d80) is 4.8M, max 38.4M, 33.6M free.
Oct 30 05:33:10.317966 systemd[1]: Queued start job for default target multi-user.target.
Oct 30 05:33:10.337887 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Oct 30 05:33:10.338173 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 30 05:33:10.543994 jq[1238]: true
Oct 30 05:33:10.544535 jq[1271]: true
Oct 30 05:33:10.548858 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 30 05:33:10.553710 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 30 05:33:10.553766 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 30 05:33:10.557843 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 30 05:33:10.559408 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 30 05:33:10.570247 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
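The journald size report above ("Runtime Journal … is 4.8M, max 38.4M, 33.6M free") packs three figures into one line. A throwaway parser sketch for pulling them out; the exact wording is journald's and can vary across systemd versions, so the regex is an assumption tied to this log:

```python
import re

def journal_sizes(line):
    """Extract (current, max, free) sizes in MiB from a journald size report."""
    m = re.search(r"is ([\d.]+)M, max ([\d.]+)M, ([\d.]+)M free", line)
    if m is None:
        raise ValueError("not a journald size report: " + line)
    return tuple(float(x) for x in m.groups())

line = ("systemd-journald[1254]: Runtime Journal "
        "(/run/log/journal/305162d36ddf4cd1a49ee6f152da6d80) "
        "is 4.8M, max 38.4M, 33.6M free.")
cur, cap, free = journal_sizes(line)
print(cur, cap, free)  # → 4.8 38.4 33.6
```

The same pattern matches the System Journal line later in the log ("is 8M, max 588.1M, 580.1M free").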
Oct 30 05:33:10.575757 kernel: loop1: detected capacity change from 0 to 128912
Oct 30 05:33:10.576941 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 30 05:33:10.580775 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 30 05:33:10.587911 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 30 05:33:10.588420 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 30 05:33:10.611801 kernel: loop2: detected capacity change from 0 to 2960
Oct 30 05:33:10.617290 systemd-journald[1254]: Time spent on flushing to /var/log/journal/305162d36ddf4cd1a49ee6f152da6d80 is 47.903ms for 1747 entries.
Oct 30 05:33:10.617290 systemd-journald[1254]: System Journal (/var/log/journal/305162d36ddf4cd1a49ee6f152da6d80) is 8M, max 588.1M, 580.1M free.
Oct 30 05:33:10.733111 systemd-journald[1254]: Received client request to flush runtime journal.
Oct 30 05:33:10.733397 kernel: loop3: detected capacity change from 0 to 111544
Oct 30 05:33:10.618125 ignition[1291]: Ignition 2.22.0
Oct 30 05:33:10.639472 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Oct 30 05:33:10.620885 ignition[1291]: deleting config from guestinfo properties
Oct 30 05:33:10.664965 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 30 05:33:10.637028 ignition[1291]: Successfully deleted config
Oct 30 05:33:10.734533 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 30 05:33:10.737060 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 30 05:33:10.747184 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 30 05:33:10.749690 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 30 05:33:10.751949 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 30 05:33:10.755796 kernel: loop4: detected capacity change from 0 to 229808 Oct 30 05:33:10.768107 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 30 05:33:10.780291 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Oct 30 05:33:10.780303 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Oct 30 05:33:10.783059 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 30 05:33:10.789843 kernel: loop5: detected capacity change from 0 to 128912 Oct 30 05:33:10.800774 kernel: loop6: detected capacity change from 0 to 2960 Oct 30 05:33:10.809799 kernel: loop7: detected capacity change from 0 to 111544 Oct 30 05:33:10.811492 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 30 05:33:10.822839 kernel: loop1: detected capacity change from 0 to 229808 Oct 30 05:33:10.837464 (sd-merge)[1340]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-vmware.raw'. Oct 30 05:33:10.840814 (sd-merge)[1340]: Merged extensions into '/usr'. Oct 30 05:33:10.844887 systemd[1]: Reload requested from client PID 1290 ('systemd-sysext') (unit systemd-sysext.service)... Oct 30 05:33:10.844975 systemd[1]: Reloading... Oct 30 05:33:10.874836 systemd-resolved[1334]: Positive Trust Anchors: Oct 30 05:33:10.875057 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 30 05:33:10.875091 systemd-resolved[1334]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 30 05:33:10.875139 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 30 05:33:10.878341 systemd-resolved[1334]: Defaulting to hostname 'linux'. Oct 30 05:33:10.895771 zram_generator::config[1370]: No configuration found. Oct 30 05:33:10.991952 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 30 05:33:11.042524 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 30 05:33:11.042779 systemd[1]: Reloading finished in 197 ms. Oct 30 05:33:11.056335 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 30 05:33:11.057410 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 30 05:33:11.059001 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 30 05:33:11.068641 systemd[1]: Starting ensure-sysext.service... Oct 30 05:33:11.071830 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 30 05:33:11.094171 systemd-tmpfiles[1430]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 30 05:33:11.094423 systemd-tmpfiles[1430]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Oct 30 05:33:11.094669 systemd-tmpfiles[1430]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 30 05:33:11.094901 systemd-tmpfiles[1430]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 30 05:33:11.095471 systemd-tmpfiles[1430]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 30 05:33:11.095703 systemd-tmpfiles[1430]: ACLs are not supported, ignoring. Oct 30 05:33:11.096828 systemd-tmpfiles[1430]: ACLs are not supported, ignoring. Oct 30 05:33:11.099318 systemd[1]: Reload requested from client PID 1429 ('systemctl') (unit ensure-sysext.service)... Oct 30 05:33:11.099329 systemd[1]: Reloading... Oct 30 05:33:11.138762 zram_generator::config[1463]: No configuration found. Oct 30 05:33:11.167661 systemd-tmpfiles[1430]: Detected autofs mount point /boot during canonicalization of boot. Oct 30 05:33:11.167668 systemd-tmpfiles[1430]: Skipping /boot Oct 30 05:33:11.172422 systemd-tmpfiles[1430]: Detected autofs mount point /boot during canonicalization of boot. Oct 30 05:33:11.172429 systemd-tmpfiles[1430]: Skipping /boot Oct 30 05:33:11.219869 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 30 05:33:11.270691 systemd[1]: Reloading finished in 171 ms. Oct 30 05:33:11.289559 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 30 05:33:11.290052 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 30 05:33:11.304848 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 30 05:33:11.306193 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 30 05:33:11.307541 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Oct 30 05:33:11.311827 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 30 05:33:11.313292 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 30 05:33:11.316844 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 30 05:33:11.325331 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 30 05:33:11.333090 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 30 05:33:11.336551 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 30 05:33:11.350181 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 30 05:33:11.362021 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 30 05:33:11.364595 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 05:33:11.364703 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 05:33:11.374164 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 05:33:11.374417 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 05:33:11.374589 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 05:33:11.375604 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Oct 30 05:33:11.381177 systemd-udevd[1522]: Using default interface naming scheme 'v257'. Oct 30 05:33:11.382254 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 05:33:11.388799 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 30 05:33:11.389047 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 05:33:11.389122 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 05:33:11.389230 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 05:33:11.391968 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 30 05:33:11.397068 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 30 05:33:11.397730 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 30 05:33:11.398914 systemd[1]: Finished ensure-sysext.service. Oct 30 05:33:11.411339 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 05:33:11.412938 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 30 05:33:11.413185 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 30 05:33:11.413585 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 30 05:33:11.413945 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 30 05:33:11.418370 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 30 05:33:11.419053 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Oct 30 05:33:11.429127 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 30 05:33:11.433574 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 30 05:33:11.440264 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 30 05:33:11.444870 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 30 05:33:11.446037 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 30 05:33:11.446571 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 30 05:33:11.448681 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 30 05:33:11.449336 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 30 05:33:11.449507 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 30 05:33:11.453697 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 30 05:33:11.456827 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 30 05:33:11.461971 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 30 05:33:11.462170 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 30 05:33:11.463337 augenrules[1578]: No rules Oct 30 05:33:11.467388 systemd[1]: audit-rules.service: Deactivated successfully. Oct 30 05:33:11.467621 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 30 05:33:11.468296 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 30 05:33:11.468527 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 30 05:33:11.474826 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Oct 30 05:33:11.477983 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 30 05:33:11.492948 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 30 05:33:11.494413 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 30 05:33:11.498206 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 30 05:33:11.499548 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 30 05:33:11.499922 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 30 05:33:11.590214 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 30 05:33:11.590862 systemd[1]: Reached target time-set.target - System Time Set. Oct 30 05:33:11.591636 systemd-networkd[1568]: lo: Link UP Oct 30 05:33:11.591811 systemd-networkd[1568]: lo: Gained carrier Oct 30 05:33:11.603661 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 30 05:33:11.603850 systemd[1]: Reached target network.target - Network. Oct 30 05:33:11.607337 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 30 05:33:11.607511 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 30 05:33:11.604658 systemd-networkd[1568]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Oct 30 05:33:11.605823 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 30 05:33:11.607433 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Oct 30 05:33:11.618481 systemd-networkd[1568]: ens192: Link UP Oct 30 05:33:11.620720 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 30 05:33:11.621189 systemd-networkd[1568]: ens192: Gained carrier Oct 30 05:33:11.625219 systemd-timesyncd[1569]: Network configuration changed, trying to establish connection. Oct 30 05:33:11.637771 kernel: mousedev: PS/2 mouse device common for all mice Oct 30 05:33:11.635345 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 30 05:33:11.645768 kernel: ACPI: button: Power Button [PWRF] Oct 30 05:34:49.748455 systemd-resolved[1334]: Clock change detected. Flushing caches. Oct 30 05:34:49.748530 systemd-timesyncd[1569]: Contacted time server 144.202.41.38:123 (2.flatcar.pool.ntp.org). Oct 30 05:34:49.748637 systemd-timesyncd[1569]: Initial clock synchronization to Thu 2025-10-30 05:34:49.748427 UTC. Oct 30 05:34:49.754808 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Oct 30 05:34:49.756618 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 30 05:34:49.760289 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Oct 30 05:34:49.788724 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 30 05:34:49.802862 (udev-worker)[1556]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Oct 30 05:34:49.837664 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 05:34:49.916606 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 05:34:50.118346 ldconfig[1520]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 30 05:34:50.120589 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Oct 30 05:34:50.122197 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 30 05:34:50.137414 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 30 05:34:50.138012 systemd[1]: Reached target sysinit.target - System Initialization. Oct 30 05:34:50.138247 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 30 05:34:50.138432 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 30 05:34:50.138590 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 30 05:34:50.138842 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 30 05:34:50.139025 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 30 05:34:50.139202 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 30 05:34:50.139409 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 30 05:34:50.139471 systemd[1]: Reached target paths.target - Path Units. Oct 30 05:34:50.139597 systemd[1]: Reached target timers.target - Timer Units. Oct 30 05:34:50.140514 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 30 05:34:50.141639 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 30 05:34:50.143072 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 30 05:34:50.143266 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 30 05:34:50.143454 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 30 05:34:50.145346 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Oct 30 05:34:50.145619 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 30 05:34:50.146165 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 30 05:34:50.146714 systemd[1]: Reached target sockets.target - Socket Units. Oct 30 05:34:50.146813 systemd[1]: Reached target basic.target - Basic System. Oct 30 05:34:50.146938 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 30 05:34:50.146959 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 30 05:34:50.147795 systemd[1]: Starting containerd.service - containerd container runtime... Oct 30 05:34:50.150403 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 30 05:34:50.152230 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 30 05:34:50.154284 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 30 05:34:50.155427 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 30 05:34:50.155542 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 30 05:34:50.163617 jq[1644]: false Oct 30 05:34:50.163218 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 30 05:34:50.165433 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 30 05:34:50.168369 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 30 05:34:50.169699 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 30 05:34:50.174557 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 30 05:34:50.177066 systemd[1]: Starting systemd-logind.service - User Login Management... 
Oct 30 05:34:50.177349 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 30 05:34:50.177854 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 30 05:34:50.181605 systemd[1]: Starting update-engine.service - Update Engine... Oct 30 05:34:50.182161 extend-filesystems[1645]: Found /dev/sda6 Oct 30 05:34:50.185500 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 30 05:34:50.187223 google_oslogin_nss_cache[1646]: oslogin_cache_refresh[1646]: Refreshing passwd entry cache Oct 30 05:34:50.188583 oslogin_cache_refresh[1646]: Refreshing passwd entry cache Oct 30 05:34:50.190336 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Oct 30 05:34:50.195370 extend-filesystems[1645]: Found /dev/sda9 Oct 30 05:34:50.195928 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 30 05:34:50.196309 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 30 05:34:50.196448 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 30 05:34:50.200308 extend-filesystems[1645]: Checking size of /dev/sda9 Oct 30 05:34:50.204463 google_oslogin_nss_cache[1646]: oslogin_cache_refresh[1646]: Failure getting users, quitting Oct 30 05:34:50.204463 google_oslogin_nss_cache[1646]: oslogin_cache_refresh[1646]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 30 05:34:50.204463 google_oslogin_nss_cache[1646]: oslogin_cache_refresh[1646]: Refreshing group entry cache Oct 30 05:34:50.203363 oslogin_cache_refresh[1646]: Failure getting users, quitting Oct 30 05:34:50.203375 oslogin_cache_refresh[1646]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Oct 30 05:34:50.203407 oslogin_cache_refresh[1646]: Refreshing group entry cache Oct 30 05:34:50.204957 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 30 05:34:50.208315 google_oslogin_nss_cache[1646]: oslogin_cache_refresh[1646]: Failure getting groups, quitting Oct 30 05:34:50.208315 google_oslogin_nss_cache[1646]: oslogin_cache_refresh[1646]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 30 05:34:50.208243 oslogin_cache_refresh[1646]: Failure getting groups, quitting Oct 30 05:34:50.208251 oslogin_cache_refresh[1646]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 30 05:34:50.208443 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 30 05:34:50.210718 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 30 05:34:50.210859 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 30 05:34:50.221251 jq[1657]: true Oct 30 05:34:50.228418 update_engine[1655]: I20251030 05:34:50.227748 1655 main.cc:92] Flatcar Update Engine starting Oct 30 05:34:50.233015 extend-filesystems[1645]: Resized partition /dev/sda9 Oct 30 05:34:50.237698 systemd[1]: motdgen.service: Deactivated successfully. Oct 30 05:34:50.245284 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 30 05:34:50.246057 extend-filesystems[1692]: resize2fs 1.47.3 (8-Jul-2025) Oct 30 05:34:50.254292 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 1635323 blocks Oct 30 05:34:50.258916 kernel: EXT4-fs (sda9): resized filesystem to 1635323 Oct 30 05:34:50.254599 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. 
Oct 30 05:34:50.259022 jq[1687]: true Oct 30 05:34:50.259205 extend-filesystems[1692]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 30 05:34:50.259205 extend-filesystems[1692]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 30 05:34:50.259205 extend-filesystems[1692]: The filesystem on /dev/sda9 is now 1635323 (4k) blocks long. Oct 30 05:34:50.263262 extend-filesystems[1645]: Resized filesystem in /dev/sda9 Oct 30 05:34:50.273741 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 30 05:34:50.276538 tar[1668]: linux-amd64/LICENSE Oct 30 05:34:50.276538 tar[1668]: linux-amd64/helm Oct 30 05:34:50.274304 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 30 05:34:50.284345 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Oct 30 05:34:50.287785 dbus-daemon[1642]: [system] SELinux support is enabled Oct 30 05:34:50.287896 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 30 05:34:50.290687 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 30 05:34:50.290707 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 30 05:34:50.290870 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 30 05:34:50.290880 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 30 05:34:50.315619 systemd[1]: Started update-engine.service - Update Engine. Oct 30 05:34:50.315771 update_engine[1655]: I20251030 05:34:50.315743 1655 update_check_scheduler.cc:74] Next update check in 7m48s Oct 30 05:34:50.320321 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Oct 30 05:34:50.327752 unknown[1697]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Oct 30 05:34:50.330717 unknown[1697]: Core dump limit set to -1 Oct 30 05:34:50.340223 systemd-logind[1654]: Watching system buttons on /dev/input/event2 (Power Button) Oct 30 05:34:50.340240 systemd-logind[1654]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 30 05:34:50.343598 systemd-logind[1654]: New seat seat0. Oct 30 05:34:50.345551 systemd[1]: Started systemd-logind.service - User Login Management. Oct 30 05:34:50.354421 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Oct 30 05:34:50.411201 bash[1720]: Updated "/home/core/.ssh/authorized_keys" Oct 30 05:34:50.412894 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 30 05:34:50.413520 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 30 05:34:50.489351 locksmithd[1703]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 30 05:34:50.575766 containerd[1688]: time="2025-10-30T05:34:50Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 30 05:34:50.578401 containerd[1688]: time="2025-10-30T05:34:50.576045726Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 30 05:34:50.581832 containerd[1688]: time="2025-10-30T05:34:50.581801113Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.165µs" Oct 30 05:34:50.581832 containerd[1688]: time="2025-10-30T05:34:50.581827296Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 30 05:34:50.581891 containerd[1688]: 
time="2025-10-30T05:34:50.581843241Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 30 05:34:50.581949 containerd[1688]: time="2025-10-30T05:34:50.581935915Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 30 05:34:50.581972 containerd[1688]: time="2025-10-30T05:34:50.581949064Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 30 05:34:50.581986 containerd[1688]: time="2025-10-30T05:34:50.581971645Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 30 05:34:50.582034 containerd[1688]: time="2025-10-30T05:34:50.582021396Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 30 05:34:50.582053 containerd[1688]: time="2025-10-30T05:34:50.582034634Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 30 05:34:50.582184 containerd[1688]: time="2025-10-30T05:34:50.582168502Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 30 05:34:50.582184 containerd[1688]: time="2025-10-30T05:34:50.582182536Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 30 05:34:50.582218 containerd[1688]: time="2025-10-30T05:34:50.582192616Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 30 05:34:50.582218 containerd[1688]: time="2025-10-30T05:34:50.582200660Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 30 05:34:50.582260 containerd[1688]: time="2025-10-30T05:34:50.582248751Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 30 05:34:50.582593 containerd[1688]: time="2025-10-30T05:34:50.582579634Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 30 05:34:50.582625 containerd[1688]: time="2025-10-30T05:34:50.582601907Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 30 05:34:50.582625 containerd[1688]: time="2025-10-30T05:34:50.582608744Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 30 05:34:50.582625 containerd[1688]: time="2025-10-30T05:34:50.582623006Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 30 05:34:50.582785 containerd[1688]: time="2025-10-30T05:34:50.582772616Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 30 05:34:50.583086 containerd[1688]: time="2025-10-30T05:34:50.582810930Z" level=info msg="metadata content store policy set" policy=shared Oct 30 05:34:50.586444 containerd[1688]: time="2025-10-30T05:34:50.586425988Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 30 05:34:50.586478 containerd[1688]: time="2025-10-30T05:34:50.586455415Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 30 05:34:50.586478 containerd[1688]: time="2025-10-30T05:34:50.586466428Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 30 05:34:50.586478 containerd[1688]: 
time="2025-10-30T05:34:50.586473318Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 30 05:34:50.586530 containerd[1688]: time="2025-10-30T05:34:50.586481626Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 30 05:34:50.586530 containerd[1688]: time="2025-10-30T05:34:50.586488061Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 30 05:34:50.586530 containerd[1688]: time="2025-10-30T05:34:50.586496259Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 30 05:34:50.586530 containerd[1688]: time="2025-10-30T05:34:50.586504131Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 30 05:34:50.586530 containerd[1688]: time="2025-10-30T05:34:50.586510685Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 30 05:34:50.586530 containerd[1688]: time="2025-10-30T05:34:50.586516672Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 30 05:34:50.586530 containerd[1688]: time="2025-10-30T05:34:50.586522239Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 30 05:34:50.586530 containerd[1688]: time="2025-10-30T05:34:50.586529146Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 30 05:34:50.586630 containerd[1688]: time="2025-10-30T05:34:50.586596317Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 30 05:34:50.586630 containerd[1688]: time="2025-10-30T05:34:50.586612876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 30 05:34:50.586630 
containerd[1688]: time="2025-10-30T05:34:50.586622248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 30 05:34:50.586672 containerd[1688]: time="2025-10-30T05:34:50.586630229Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 30 05:34:50.586672 containerd[1688]: time="2025-10-30T05:34:50.586637035Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 30 05:34:50.586672 containerd[1688]: time="2025-10-30T05:34:50.586642648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 30 05:34:50.586672 containerd[1688]: time="2025-10-30T05:34:50.586648737Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 30 05:34:50.586672 containerd[1688]: time="2025-10-30T05:34:50.586654192Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 30 05:34:50.586672 containerd[1688]: time="2025-10-30T05:34:50.586661659Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 30 05:34:50.586751 containerd[1688]: time="2025-10-30T05:34:50.586672029Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 30 05:34:50.586751 containerd[1688]: time="2025-10-30T05:34:50.586689375Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 30 05:34:50.586751 containerd[1688]: time="2025-10-30T05:34:50.586732549Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 30 05:34:50.586751 containerd[1688]: time="2025-10-30T05:34:50.586741615Z" level=info msg="Start snapshots syncer" Oct 30 05:34:50.586809 containerd[1688]: time="2025-10-30T05:34:50.586755182Z" level=info msg="loading plugin" 
id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 30 05:34:50.589252 containerd[1688]: time="2025-10-30T05:34:50.586901660Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 30 05:34:50.589252 containerd[1688]: time="2025-10-30T05:34:50.586937084Z" level=info msg="loading plugin" 
id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 30 05:34:50.589377 containerd[1688]: time="2025-10-30T05:34:50.587808652Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 30 05:34:50.589377 containerd[1688]: time="2025-10-30T05:34:50.587870041Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 30 05:34:50.589377 containerd[1688]: time="2025-10-30T05:34:50.587891182Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 30 05:34:50.589377 containerd[1688]: time="2025-10-30T05:34:50.587899523Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 30 05:34:50.589377 containerd[1688]: time="2025-10-30T05:34:50.587905636Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 30 05:34:50.589377 containerd[1688]: time="2025-10-30T05:34:50.587913304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 30 05:34:50.589377 containerd[1688]: time="2025-10-30T05:34:50.587919586Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 30 05:34:50.589377 containerd[1688]: time="2025-10-30T05:34:50.587926084Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 30 05:34:50.589377 containerd[1688]: time="2025-10-30T05:34:50.587939806Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 30 05:34:50.589377 containerd[1688]: time="2025-10-30T05:34:50.587946448Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 30 05:34:50.589377 containerd[1688]: time="2025-10-30T05:34:50.587953438Z" level=info msg="loading plugin" 
id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 30 05:34:50.589377 containerd[1688]: time="2025-10-30T05:34:50.587970468Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 30 05:34:50.589377 containerd[1688]: time="2025-10-30T05:34:50.587979213Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 30 05:34:50.589377 containerd[1688]: time="2025-10-30T05:34:50.587984928Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 30 05:34:50.589587 containerd[1688]: time="2025-10-30T05:34:50.587990348Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 30 05:34:50.589587 containerd[1688]: time="2025-10-30T05:34:50.587995269Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 30 05:34:50.589587 containerd[1688]: time="2025-10-30T05:34:50.588002214Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 30 05:34:50.589587 containerd[1688]: time="2025-10-30T05:34:50.588008434Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 30 05:34:50.589587 containerd[1688]: time="2025-10-30T05:34:50.588017684Z" level=info msg="runtime interface created" Oct 30 05:34:50.589587 containerd[1688]: time="2025-10-30T05:34:50.588020726Z" level=info msg="created NRI interface" Oct 30 05:34:50.589587 containerd[1688]: time="2025-10-30T05:34:50.588025115Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 30 05:34:50.589587 containerd[1688]: time="2025-10-30T05:34:50.588032254Z" level=info msg="Connect containerd service" Oct 30 
05:34:50.589587 containerd[1688]: time="2025-10-30T05:34:50.588050915Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 30 05:34:50.589587 containerd[1688]: time="2025-10-30T05:34:50.588528405Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 30 05:34:50.633450 sshd_keygen[1691]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 30 05:34:50.662423 systemd-networkd[1568]: ens192: Gained IPv6LL Oct 30 05:34:50.673332 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 30 05:34:50.673900 systemd[1]: Reached target network-online.target - Network is Online. Oct 30 05:34:50.677969 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Oct 30 05:34:50.680697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 05:34:50.684083 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 30 05:34:50.684738 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 30 05:34:50.689981 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 30 05:34:50.711521 containerd[1688]: time="2025-10-30T05:34:50.711493744Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 30 05:34:50.711589 containerd[1688]: time="2025-10-30T05:34:50.711554008Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 30 05:34:50.711589 containerd[1688]: time="2025-10-30T05:34:50.711574692Z" level=info msg="Start subscribing containerd event" Oct 30 05:34:50.711642 containerd[1688]: time="2025-10-30T05:34:50.711597026Z" level=info msg="Start recovering state" Oct 30 05:34:50.712611 containerd[1688]: time="2025-10-30T05:34:50.711675877Z" level=info msg="Start event monitor" Oct 30 05:34:50.712611 containerd[1688]: time="2025-10-30T05:34:50.711687864Z" level=info msg="Start cni network conf syncer for default" Oct 30 05:34:50.712611 containerd[1688]: time="2025-10-30T05:34:50.711706892Z" level=info msg="Start streaming server" Oct 30 05:34:50.712611 containerd[1688]: time="2025-10-30T05:34:50.711712750Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 30 05:34:50.712611 containerd[1688]: time="2025-10-30T05:34:50.711717216Z" level=info msg="runtime interface starting up..." Oct 30 05:34:50.712611 containerd[1688]: time="2025-10-30T05:34:50.711720693Z" level=info msg="starting plugins..." Oct 30 05:34:50.712611 containerd[1688]: time="2025-10-30T05:34:50.711730300Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 30 05:34:50.712611 containerd[1688]: time="2025-10-30T05:34:50.712548753Z" level=info msg="containerd successfully booted in 0.137022s" Oct 30 05:34:50.711872 systemd[1]: Started containerd.service - containerd container runtime. Oct 30 05:34:50.714887 systemd[1]: issuegen.service: Deactivated successfully. Oct 30 05:34:50.715209 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 30 05:34:50.718403 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 30 05:34:50.745311 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 30 05:34:50.745961 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 30 05:34:50.749612 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Oct 30 05:34:50.752638 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 30 05:34:50.752863 systemd[1]: Reached target getty.target - Login Prompts. Oct 30 05:34:50.767228 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 30 05:34:50.767431 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Oct 30 05:34:50.767813 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 30 05:34:50.839724 tar[1668]: linux-amd64/README.md Oct 30 05:34:50.849626 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 30 05:34:52.257266 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 05:34:52.257800 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 30 05:34:52.258299 systemd[1]: Startup finished in 3.002s (kernel) + 5.954s (initrd) + 5.246s (userspace) = 14.204s. Oct 30 05:34:52.269574 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 05:34:52.635576 login[1836]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 30 05:34:52.637343 login[1837]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 30 05:34:52.642715 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 30 05:34:52.645439 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 30 05:34:52.650551 systemd-logind[1654]: New session 1 of user core. Oct 30 05:34:52.652753 systemd-logind[1654]: New session 2 of user core. Oct 30 05:34:52.665060 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 30 05:34:52.666823 systemd[1]: Starting user@500.service - User Manager for UID 500... 
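The containerd error logged during startup ("no network config found in /etc/cni/net.d: cni plugin not initialized") indicates the CRI plugin came up before any CNI configuration was installed, which is normal on a node that has not yet been joined to a cluster. As a hedged illustration only, a minimal bridge conflist of the kind containerd would load from /etc/cni/net.d looks roughly like the following; the name, bridge device, and subnet are assumptions for the sketch, not values taken from this system:

```json
{
  "cniVersion": "1.0.0",
  "name": "example-pod-network",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16"
      }
    }
  ]
}
```

Once a file like this (conventionally named with a .conflist extension) appears in /etc/cni/net.d, the "Start cni network conf syncer" loop seen in the log picks it up without a containerd restart.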
Oct 30 05:34:52.680635 (systemd)[1860]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 30 05:34:52.682728 systemd-logind[1654]: New session c1 of user core. Oct 30 05:34:52.771089 kubelet[1849]: E1030 05:34:52.771061 1849 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 05:34:52.773006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 05:34:52.773169 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 05:34:52.773483 systemd[1]: kubelet.service: Consumed 650ms CPU time, 267.5M memory peak. Oct 30 05:34:52.777140 systemd[1860]: Queued start job for default target default.target. Oct 30 05:34:52.787599 systemd[1860]: Created slice app.slice - User Application Slice. Oct 30 05:34:52.787621 systemd[1860]: Reached target paths.target - Paths. Oct 30 05:34:52.787649 systemd[1860]: Reached target timers.target - Timers. Oct 30 05:34:52.789340 systemd[1860]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 30 05:34:52.794877 systemd[1860]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 30 05:34:52.794966 systemd[1860]: Reached target sockets.target - Sockets. Oct 30 05:34:52.795043 systemd[1860]: Reached target basic.target - Basic System. Oct 30 05:34:52.795114 systemd[1860]: Reached target default.target - Main User Target. Oct 30 05:34:52.795141 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 30 05:34:52.795197 systemd[1860]: Startup finished in 105ms. Oct 30 05:34:52.799470 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 30 05:34:52.800171 systemd[1]: Started session-2.scope - Session 2 of User core. 
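The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the expected startup behavior on a node where kubeadm (or an equivalent provisioner) has not yet written the kubelet configuration: the process exits non-zero rather than running unconfigured. The check can be sketched as below; the temporary path is a side-effect-free stand-in for the real /var/lib/kubelet/config.yaml:

```shell
# Stand-in for /var/lib/kubelet/config.yaml so the sketch touches nothing real.
kubelet_config="$(mktemp -d)/config.yaml"

# Mirror the condition kubelet hits: the config file does not exist yet,
# so startup fails until a provisioner (e.g. kubeadm init/join) writes it.
if [ ! -f "$kubelet_config" ]; then
  echo "missing"
fi
```

systemd then records the non-zero exit (status=1/FAILURE) and schedules a restart, which is exactly the pattern repeated later in this log.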
Oct 30 05:35:03.023502 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 30 05:35:03.024695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 05:35:03.292584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 05:35:03.306543 (kubelet)[1899]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 05:35:03.343974 kubelet[1899]: E1030 05:35:03.343948 1899 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 05:35:03.346395 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 05:35:03.346493 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 05:35:03.346741 systemd[1]: kubelet.service: Consumed 101ms CPU time, 108.8M memory peak. Oct 30 05:35:13.421139 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 30 05:35:13.422318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 05:35:13.770014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 30 05:35:13.781564 (kubelet)[1915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 05:35:13.839340 kubelet[1915]: E1030 05:35:13.839304 1915 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 05:35:13.840881 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 05:35:13.841018 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 05:35:13.841363 systemd[1]: kubelet.service: Consumed 94ms CPU time, 109.1M memory peak. Oct 30 05:35:20.460247 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 30 05:35:20.461680 systemd[1]: Started sshd@0-139.178.70.106:22-139.178.68.195:55010.service - OpenSSH per-connection server daemon (139.178.68.195:55010). Oct 30 05:35:20.524832 sshd[1923]: Accepted publickey for core from 139.178.68.195 port 55010 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ Oct 30 05:35:20.525729 sshd-session[1923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 05:35:20.529406 systemd-logind[1654]: New session 3 of user core. Oct 30 05:35:20.536432 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 30 05:35:20.548879 systemd[1]: Started sshd@1-139.178.70.106:22-139.178.68.195:55016.service - OpenSSH per-connection server daemon (139.178.68.195:55016). 
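The roughly ten-second spacing between kubelet restart attempts (05:35:03, 05:35:13, and again later with "restart counter is at 3") is consistent with a unit using systemd's restart-on-failure machinery. As a hedged sketch only, and not the actual Flatcar unit file, the relevant drop-in settings would look something like:

```
[Service]
# Assumed values inferred from the ~10 s cadence in this log;
# the real unit on this host may differ.
Restart=always
RestartSec=10
```

With these settings systemd keeps re-launching kubelet until the missing config file appears, incrementing the restart counter on each failed attempt, which matches the repeated failure blocks in this capture.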
Oct 30 05:35:20.594378 sshd[1929]: Accepted publickey for core from 139.178.68.195 port 55016 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ Oct 30 05:35:20.595175 sshd-session[1929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 05:35:20.598765 systemd-logind[1654]: New session 4 of user core. Oct 30 05:35:20.606442 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 30 05:35:20.615198 sshd[1932]: Connection closed by 139.178.68.195 port 55016 Oct 30 05:35:20.616364 sshd-session[1929]: pam_unix(sshd:session): session closed for user core Oct 30 05:35:20.620992 systemd[1]: sshd@1-139.178.70.106:22-139.178.68.195:55016.service: Deactivated successfully. Oct 30 05:35:20.622023 systemd[1]: session-4.scope: Deactivated successfully. Oct 30 05:35:20.622959 systemd-logind[1654]: Session 4 logged out. Waiting for processes to exit. Oct 30 05:35:20.623974 systemd[1]: Started sshd@2-139.178.70.106:22-139.178.68.195:55020.service - OpenSSH per-connection server daemon (139.178.68.195:55020). Oct 30 05:35:20.625372 systemd-logind[1654]: Removed session 4. Oct 30 05:35:20.665379 sshd[1938]: Accepted publickey for core from 139.178.68.195 port 55020 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ Oct 30 05:35:20.665984 sshd-session[1938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 05:35:20.669111 systemd-logind[1654]: New session 5 of user core. Oct 30 05:35:20.678374 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 30 05:35:20.682734 sshd[1941]: Connection closed by 139.178.68.195 port 55020 Oct 30 05:35:20.682958 sshd-session[1938]: pam_unix(sshd:session): session closed for user core Oct 30 05:35:20.693520 systemd[1]: sshd@2-139.178.70.106:22-139.178.68.195:55020.service: Deactivated successfully. Oct 30 05:35:20.694490 systemd[1]: session-5.scope: Deactivated successfully. Oct 30 05:35:20.694962 systemd-logind[1654]: Session 5 logged out. 
Waiting for processes to exit. Oct 30 05:35:20.696386 systemd[1]: Started sshd@3-139.178.70.106:22-139.178.68.195:55026.service - OpenSSH per-connection server daemon (139.178.68.195:55026). Oct 30 05:35:20.696967 systemd-logind[1654]: Removed session 5. Oct 30 05:35:20.727971 sshd[1947]: Accepted publickey for core from 139.178.68.195 port 55026 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ Oct 30 05:35:20.729054 sshd-session[1947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 05:35:20.731866 systemd-logind[1654]: New session 6 of user core. Oct 30 05:35:20.739378 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 30 05:35:20.746455 sshd[1950]: Connection closed by 139.178.68.195 port 55026 Oct 30 05:35:20.746742 sshd-session[1947]: pam_unix(sshd:session): session closed for user core Oct 30 05:35:20.755177 systemd[1]: sshd@3-139.178.70.106:22-139.178.68.195:55026.service: Deactivated successfully. Oct 30 05:35:20.756042 systemd[1]: session-6.scope: Deactivated successfully. Oct 30 05:35:20.756496 systemd-logind[1654]: Session 6 logged out. Waiting for processes to exit. Oct 30 05:35:20.757647 systemd[1]: Started sshd@4-139.178.70.106:22-139.178.68.195:55030.service - OpenSSH per-connection server daemon (139.178.68.195:55030). Oct 30 05:35:20.758249 systemd-logind[1654]: Removed session 6. Oct 30 05:35:20.787061 sshd[1956]: Accepted publickey for core from 139.178.68.195 port 55030 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ Oct 30 05:35:20.787907 sshd-session[1956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 05:35:20.791080 systemd-logind[1654]: New session 7 of user core. Oct 30 05:35:20.798391 systemd[1]: Started session-7.scope - Session 7 of User core. 
Oct 30 05:35:20.834899 sudo[1960]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 30 05:35:20.835118 sudo[1960]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 05:35:20.853537 sudo[1960]: pam_unix(sudo:session): session closed for user root Oct 30 05:35:20.854400 sshd[1959]: Connection closed by 139.178.68.195 port 55030 Oct 30 05:35:20.854738 sshd-session[1956]: pam_unix(sshd:session): session closed for user core Oct 30 05:35:20.864380 systemd[1]: sshd@4-139.178.70.106:22-139.178.68.195:55030.service: Deactivated successfully. Oct 30 05:35:20.865227 systemd[1]: session-7.scope: Deactivated successfully. Oct 30 05:35:20.865721 systemd-logind[1654]: Session 7 logged out. Waiting for processes to exit. Oct 30 05:35:20.866746 systemd[1]: Started sshd@5-139.178.70.106:22-139.178.68.195:55046.service - OpenSSH per-connection server daemon (139.178.68.195:55046). Oct 30 05:35:20.868449 systemd-logind[1654]: Removed session 7. Oct 30 05:35:20.898972 sshd[1966]: Accepted publickey for core from 139.178.68.195 port 55046 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ Oct 30 05:35:20.899729 sshd-session[1966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 05:35:20.902790 systemd-logind[1654]: New session 8 of user core. Oct 30 05:35:20.909363 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 30 05:35:20.916659 sudo[1972]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 30 05:35:20.916978 sudo[1972]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 05:35:20.919383 sudo[1972]: pam_unix(sudo:session): session closed for user root Oct 30 05:35:20.922907 sudo[1971]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 30 05:35:20.923053 sudo[1971]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 05:35:20.928835 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 30 05:35:20.950176 augenrules[1994]: No rules Oct 30 05:35:20.950481 systemd[1]: audit-rules.service: Deactivated successfully. Oct 30 05:35:20.950700 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 30 05:35:20.951303 sudo[1971]: pam_unix(sudo:session): session closed for user root Oct 30 05:35:20.951988 sshd[1970]: Connection closed by 139.178.68.195 port 55046 Oct 30 05:35:20.952455 sshd-session[1966]: pam_unix(sshd:session): session closed for user core Oct 30 05:35:20.964181 systemd[1]: sshd@5-139.178.70.106:22-139.178.68.195:55046.service: Deactivated successfully. Oct 30 05:35:20.965086 systemd[1]: session-8.scope: Deactivated successfully. Oct 30 05:35:20.965538 systemd-logind[1654]: Session 8 logged out. Waiting for processes to exit. Oct 30 05:35:20.966704 systemd[1]: Started sshd@6-139.178.70.106:22-139.178.68.195:55062.service - OpenSSH per-connection server daemon (139.178.68.195:55062). Oct 30 05:35:20.967322 systemd-logind[1654]: Removed session 8. 
Oct 30 05:35:20.998699 sshd[2003]: Accepted publickey for core from 139.178.68.195 port 55062 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ Oct 30 05:35:20.999973 sshd-session[2003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 05:35:21.002925 systemd-logind[1654]: New session 9 of user core. Oct 30 05:35:21.014395 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 30 05:35:21.021869 sudo[2007]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 30 05:35:21.022023 sudo[2007]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 05:35:21.405834 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 30 05:35:21.414474 (dockerd)[2026]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 30 05:35:21.676875 dockerd[2026]: time="2025-10-30T05:35:21.676722213Z" level=info msg="Starting up" Oct 30 05:35:21.677766 dockerd[2026]: time="2025-10-30T05:35:21.677560202Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 30 05:35:21.686049 dockerd[2026]: time="2025-10-30T05:35:21.686015339Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 30 05:35:21.706436 systemd[1]: var-lib-docker-metacopy\x2dcheck2403280108-merged.mount: Deactivated successfully. Oct 30 05:35:21.724170 dockerd[2026]: time="2025-10-30T05:35:21.724142353Z" level=info msg="Loading containers: start." Oct 30 05:35:21.756327 kernel: Initializing XFRM netlink socket Oct 30 05:35:21.950969 systemd-networkd[1568]: docker0: Link UP Oct 30 05:35:21.952593 dockerd[2026]: time="2025-10-30T05:35:21.952553729Z" level=info msg="Loading containers: done." 
Oct 30 05:35:21.962723 dockerd[2026]: time="2025-10-30T05:35:21.962519543Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 30 05:35:21.962723 dockerd[2026]: time="2025-10-30T05:35:21.962570487Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 30 05:35:21.962723 dockerd[2026]: time="2025-10-30T05:35:21.962614456Z" level=info msg="Initializing buildkit" Oct 30 05:35:21.986072 dockerd[2026]: time="2025-10-30T05:35:21.986056258Z" level=info msg="Completed buildkit initialization" Oct 30 05:35:21.992684 dockerd[2026]: time="2025-10-30T05:35:21.992664136Z" level=info msg="Daemon has completed initialization" Oct 30 05:35:21.992785 dockerd[2026]: time="2025-10-30T05:35:21.992693459Z" level=info msg="API listen on /run/docker.sock" Oct 30 05:35:21.992913 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 30 05:35:22.691685 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1775859270-merged.mount: Deactivated successfully. Oct 30 05:35:23.056249 containerd[1688]: time="2025-10-30T05:35:23.055842921Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 30 05:35:23.921077 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 30 05:35:23.922396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 05:35:24.227692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 30 05:35:24.230394 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 30 05:35:24.268145 kubelet[2246]: E1030 05:35:24.268111 2246 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 30 05:35:24.270203 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 30 05:35:24.270528 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 30 05:35:24.270917 systemd[1]: kubelet.service: Consumed 94ms CPU time, 110.7M memory peak.
Oct 30 05:35:24.271889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2724782448.mount: Deactivated successfully.
Oct 30 05:35:25.469696 containerd[1688]: time="2025-10-30T05:35:25.469194118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 05:35:25.470030 containerd[1688]: time="2025-10-30T05:35:25.470010629Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893"
Oct 30 05:35:25.470307 containerd[1688]: time="2025-10-30T05:35:25.470294549Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 05:35:25.471962 containerd[1688]: time="2025-10-30T05:35:25.471941797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 05:35:25.472403 containerd[1688]: time="2025-10-30T05:35:25.472388979Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.41650099s"
Oct 30 05:35:25.472485 containerd[1688]: time="2025-10-30T05:35:25.472476538Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Oct 30 05:35:25.472844 containerd[1688]: time="2025-10-30T05:35:25.472828639Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Oct 30 05:35:27.454041 containerd[1688]: time="2025-10-30T05:35:27.453641778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 05:35:27.454293 containerd[1688]: time="2025-10-30T05:35:27.454280959Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Oct 30 05:35:27.456355 containerd[1688]: time="2025-10-30T05:35:27.456341388Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 05:35:27.461171 containerd[1688]: time="2025-10-30T05:35:27.461157579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 05:35:27.461641 containerd[1688]: time="2025-10-30T05:35:27.461559141Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.988649701s"
Oct 30 05:35:27.461799 containerd[1688]: time="2025-10-30T05:35:27.461689322Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Oct 30 05:35:27.462212 containerd[1688]: time="2025-10-30T05:35:27.462090544Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Oct 30 05:35:28.493724 containerd[1688]: time="2025-10-30T05:35:28.493176073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 05:35:28.493724 containerd[1688]: time="2025-10-30T05:35:28.493586916Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Oct 30 05:35:28.493724 containerd[1688]: time="2025-10-30T05:35:28.493700623Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 05:35:28.495084 containerd[1688]: time="2025-10-30T05:35:28.495068854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 05:35:28.495659 containerd[1688]: time="2025-10-30T05:35:28.495646320Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.033541376s"
Oct 30 05:35:28.495706 containerd[1688]: time="2025-10-30T05:35:28.495699048Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Oct 30 05:35:28.495972 containerd[1688]: time="2025-10-30T05:35:28.495963621Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Oct 30 05:35:29.411354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1744017208.mount: Deactivated successfully.
Oct 30 05:35:29.785530 containerd[1688]: time="2025-10-30T05:35:29.785498938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 05:35:29.788830 containerd[1688]: time="2025-10-30T05:35:29.788807600Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Oct 30 05:35:29.794324 containerd[1688]: time="2025-10-30T05:35:29.794293270Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 05:35:29.803038 containerd[1688]: time="2025-10-30T05:35:29.803017811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 05:35:29.803406 containerd[1688]: time="2025-10-30T05:35:29.803206164Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.307197825s"
Oct 30 05:35:29.803406 containerd[1688]: time="2025-10-30T05:35:29.803221707Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Oct 30 05:35:29.803678 containerd[1688]: time="2025-10-30T05:35:29.803622455Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Oct 30 05:35:30.275101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3711306531.mount: Deactivated successfully.
Oct 30 05:35:31.027641 containerd[1688]: time="2025-10-30T05:35:31.027600718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 05:35:31.036761 containerd[1688]: time="2025-10-30T05:35:31.036723421Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Oct 30 05:35:31.037554 containerd[1688]: time="2025-10-30T05:35:31.037526595Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 05:35:31.039063 containerd[1688]: time="2025-10-30T05:35:31.039038278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 05:35:31.039703 containerd[1688]: time="2025-10-30T05:35:31.039614357Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.235976076s"
Oct 30 05:35:31.039703 containerd[1688]: time="2025-10-30T05:35:31.039635855Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Oct 30 05:35:31.039970 containerd[1688]: time="2025-10-30T05:35:31.039953756Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Oct 30 05:35:31.631974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount458071171.mount: Deactivated successfully.
Oct 30 05:35:31.637356 containerd[1688]: time="2025-10-30T05:35:31.637327576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 30 05:35:31.638426 containerd[1688]: time="2025-10-30T05:35:31.638405794Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Oct 30 05:35:31.640286 containerd[1688]: time="2025-10-30T05:35:31.638764436Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 30 05:35:31.642844 containerd[1688]: time="2025-10-30T05:35:31.642811713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 30 05:35:31.643212 containerd[1688]: time="2025-10-30T05:35:31.643196299Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 603.18359ms"
Oct 30 05:35:31.643243 containerd[1688]: time="2025-10-30T05:35:31.643213521Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Oct 30 05:35:31.647806 containerd[1688]: time="2025-10-30T05:35:31.647772415Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Oct 30 05:35:32.216433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2190534612.mount: Deactivated successfully.
Oct 30 05:35:34.047154 containerd[1688]: time="2025-10-30T05:35:34.046561644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 05:35:34.047154 containerd[1688]: time="2025-10-30T05:35:34.046997476Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433"
Oct 30 05:35:34.047154 containerd[1688]: time="2025-10-30T05:35:34.047128900Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 05:35:34.048644 containerd[1688]: time="2025-10-30T05:35:34.048627828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 05:35:34.049291 containerd[1688]: time="2025-10-30T05:35:34.049269347Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.401467503s"
Oct 30 05:35:34.049324 containerd[1688]: time="2025-10-30T05:35:34.049293184Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Oct 30 05:35:34.421099 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Oct 30 05:35:34.422148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 05:35:35.170348 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 05:35:35.176530 (kubelet)[2467]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 30 05:35:35.251619 kubelet[2467]: E1030 05:35:35.251586 2467 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 30 05:35:35.253634 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 30 05:35:35.253722 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 30 05:35:35.253918 systemd[1]: kubelet.service: Consumed 111ms CPU time, 109.6M memory peak.
Oct 30 05:35:36.054876 update_engine[1655]: I20251030 05:35:36.054839 1655 update_attempter.cc:509] Updating boot flags...
Oct 30 05:35:36.706531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 05:35:36.706794 systemd[1]: kubelet.service: Consumed 111ms CPU time, 109.6M memory peak.
Oct 30 05:35:36.713723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 05:35:36.727780 systemd[1]: Reload requested from client PID 2501 ('systemctl') (unit session-9.scope)...
Oct 30 05:35:36.727860 systemd[1]: Reloading...
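The crash loop above (restart counter climbing while run.go:72 reports `open /var/lib/kubelet/config.yaml: no such file or directory`) is the normal state of a node that has booted but not yet run `kubeadm init` or `kubeadm join`, which is what writes that config file. A minimal triage sketch, using an abridged copy of one of the journal lines above as sample data:

```shell
# Abridged journal line from the crash loop above (sample data, not live journal output).
line='kubelet[2467]: E1030 05:35:35.251586 2467 run.go:72] "command failed" err="... open /var/lib/kubelet/config.yaml: no such file or directory"'

# Pull out the path the kubelet could not open.
missing=$(printf '%s\n' "$line" | grep -o '/var/lib/kubelet/config\.yaml')
echo "missing: $missing"

# On a live node one would confirm with, e.g.:
#   systemctl status kubelet
#   journalctl -u kubelet --no-pager | tail
# and then run 'kubeadm init' (or 'kubeadm join'), which generates the file.
```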
Oct 30 05:35:36.782290 zram_generator::config[2544]: No configuration found.
Oct 30 05:35:36.861134 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Oct 30 05:35:36.929644 systemd[1]: Reloading finished in 201 ms.
Oct 30 05:35:36.971074 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 30 05:35:36.971127 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 30 05:35:36.971369 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 05:35:36.972636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 05:35:37.339907 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 05:35:37.347978 (kubelet)[2612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 30 05:35:37.393611 kubelet[2612]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 30 05:35:37.393821 kubelet[2612]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 30 05:35:37.393851 kubelet[2612]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
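The "Ignoring unknown escape sequences" warning above comes from backslash sequences (`\K`, `\d`) in a `grep -Po` pattern embedded in the unit's ExecStart line: systemd parses backslash escapes before the shell ever sees them, so such backslashes are typically doubled in unit files. The pattern itself behaves as intended when run from a shell; a small sketch against canned `ip addr`-style output (the 10.0.0.5 address is made up for the demo):

```shell
# Canned line resembling 'ip addr show ens192' output (hypothetical address).
addr_out='    inet 10.0.0.5/24 brd 10.0.0.255 scope global ens192'

# \K discards everything matched before it, so only the address is printed.
ip=$(printf '%s\n' "$addr_out" | grep "inet 10." | grep -Po 'inet \K[\d.]+')
echo "$ip"
```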
Oct 30 05:35:37.393936 kubelet[2612]: I1030 05:35:37.393912 2612 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 30 05:35:38.101385 kubelet[2612]: I1030 05:35:38.101356 2612 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Oct 30 05:35:38.101385 kubelet[2612]: I1030 05:35:38.101383 2612 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 30 05:35:38.101532 kubelet[2612]: I1030 05:35:38.101520 2612 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 30 05:35:38.139879 kubelet[2612]: E1030 05:35:38.139836 2612 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Oct 30 05:35:38.140518 kubelet[2612]: I1030 05:35:38.140501 2612 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 30 05:35:38.159599 kubelet[2612]: I1030 05:35:38.159575 2612 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 30 05:35:38.164682 kubelet[2612]: I1030 05:35:38.164601 2612 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 30 05:35:38.169823 kubelet[2612]: I1030 05:35:38.169671 2612 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 30 05:35:38.172083 kubelet[2612]: I1030 05:35:38.169706 2612 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 30 05:35:38.172410 kubelet[2612]: I1030 05:35:38.172218 2612 topology_manager.go:138] "Creating topology manager with none policy"
Oct 30 05:35:38.172410 kubelet[2612]: I1030 05:35:38.172229 2612 container_manager_linux.go:303] "Creating device plugin manager"
Oct 30 05:35:38.172995 kubelet[2612]: I1030 05:35:38.172988 2612 state_mem.go:36] "Initialized new in-memory state store"
Oct 30 05:35:38.175290 kubelet[2612]: I1030 05:35:38.175281 2612 kubelet.go:480] "Attempting to sync node with API server"
Oct 30 05:35:38.175341 kubelet[2612]: I1030 05:35:38.175335 2612 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 30 05:35:38.175384 kubelet[2612]: I1030 05:35:38.175380 2612 kubelet.go:386] "Adding apiserver pod source"
Oct 30 05:35:38.176838 kubelet[2612]: I1030 05:35:38.176794 2612 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 30 05:35:38.185307 kubelet[2612]: E1030 05:35:38.185104 2612 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 30 05:35:38.188597 kubelet[2612]: E1030 05:35:38.188523 2612 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Oct 30 05:35:38.188597 kubelet[2612]: I1030 05:35:38.188591 2612 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 30 05:35:38.189078 kubelet[2612]: I1030 05:35:38.189059 2612 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 30 05:35:38.192192 kubelet[2612]: W1030 05:35:38.191896 2612 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 30 05:35:38.197870 kubelet[2612]: I1030 05:35:38.197763 2612 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Oct 30 05:35:38.197870 kubelet[2612]: I1030 05:35:38.197799 2612 server.go:1289] "Started kubelet"
Oct 30 05:35:38.203668 kubelet[2612]: I1030 05:35:38.203217 2612 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 30 05:35:38.209877 kubelet[2612]: I1030 05:35:38.209841 2612 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 30 05:35:38.212256 kubelet[2612]: E1030 05:35:38.207354 2612 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.106:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18732e12f5a8fde2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-30 05:35:38.19777789 +0000 UTC m=+0.847400416,LastTimestamp:2025-10-30 05:35:38.19777789 +0000 UTC m=+0.847400416,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 30 05:35:38.214966 kubelet[2612]: I1030 05:35:38.214927 2612 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 30 05:35:38.215413 kubelet[2612]: I1030 05:35:38.215404 2612 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 30 05:35:38.217545 kubelet[2612]: I1030 05:35:38.217520 2612 server.go:317] "Adding debug handlers to kubelet server"
Oct 30 05:35:38.220312 kubelet[2612]: I1030 05:35:38.218243 2612 volume_manager.go:297] "Starting Kubelet Volume Manager"
Oct 30 05:35:38.220312 kubelet[2612]: E1030 05:35:38.218415 2612 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 30 05:35:38.220312 kubelet[2612]: I1030 05:35:38.218671 2612 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 30 05:35:38.220605 kubelet[2612]: I1030 05:35:38.220594 2612 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Oct 30 05:35:38.220631 kubelet[2612]: I1030 05:35:38.220627 2612 reconciler.go:26] "Reconciler: start to sync state"
Oct 30 05:35:38.227523 kubelet[2612]: E1030 05:35:38.227492 2612 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="200ms"
Oct 30 05:35:38.227698 kubelet[2612]: E1030 05:35:38.227685 2612 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 30 05:35:38.229840 kubelet[2612]: E1030 05:35:38.228791 2612 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 30 05:35:38.229840 kubelet[2612]: I1030 05:35:38.229838 2612 factory.go:223] Registration of the containerd container factory successfully
Oct 30 05:35:38.229924 kubelet[2612]: I1030 05:35:38.229849 2612 factory.go:223] Registration of the systemd container factory successfully
Oct 30 05:35:38.229924 kubelet[2612]: I1030 05:35:38.229907 2612 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 30 05:35:38.238201 kubelet[2612]: I1030 05:35:38.238173 2612 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Oct 30 05:35:38.239060 kubelet[2612]: I1030 05:35:38.239052 2612 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Oct 30 05:35:38.239114 kubelet[2612]: I1030 05:35:38.239109 2612 status_manager.go:230] "Starting to sync pod status with apiserver"
Oct 30 05:35:38.239153 kubelet[2612]: I1030 05:35:38.239148 2612 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 30 05:35:38.239183 kubelet[2612]: I1030 05:35:38.239179 2612 kubelet.go:2436] "Starting kubelet main sync loop"
Oct 30 05:35:38.239238 kubelet[2612]: E1030 05:35:38.239229 2612 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 30 05:35:38.243681 kubelet[2612]: E1030 05:35:38.243664 2612 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Oct 30 05:35:38.244534 kubelet[2612]: I1030 05:35:38.244518 2612 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 30 05:35:38.244534 kubelet[2612]: I1030 05:35:38.244528 2612 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 30 05:35:38.244609 kubelet[2612]: I1030 05:35:38.244544 2612 state_mem.go:36] "Initialized new in-memory state store"
Oct 30 05:35:38.245628 kubelet[2612]: I1030 05:35:38.245616 2612 policy_none.go:49] "None policy: Start"
Oct 30 05:35:38.245628 kubelet[2612]: I1030 05:35:38.245627 2612 memory_manager.go:186] "Starting memorymanager" policy="None"
Oct 30 05:35:38.245684 kubelet[2612]: I1030 05:35:38.245639 2612 state_mem.go:35] "Initializing new in-memory state store"
Oct 30 05:35:38.250241 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 30 05:35:38.261144 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 30 05:35:38.263578 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 30 05:35:38.271434 kubelet[2612]: E1030 05:35:38.271234 2612 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 30 05:35:38.272752 kubelet[2612]: I1030 05:35:38.272494 2612 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 30 05:35:38.272752 kubelet[2612]: I1030 05:35:38.272508 2612 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 30 05:35:38.272825 kubelet[2612]: I1030 05:35:38.272811 2612 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 30 05:35:38.274752 kubelet[2612]: E1030 05:35:38.274738 2612 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 30 05:35:38.274806 kubelet[2612]: E1030 05:35:38.274761 2612 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Oct 30 05:35:38.371218 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice.
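With the systemd cgroup driver (reported earlier as `cgroupDriver="systemd"`), the kubelet places each pod in a slice whose name concatenates the QoS class and the pod UID, which is why the `Created slice` entry above ends in the static pod's UID. A small sketch reconstructing that name from a UID taken from the log:

```shell
# Derive the systemd slice name for a burstable pod, matching the
# 'Created slice' journal entry above (pod UID taken from the log).
pod_uid='d13d96f639b65e57f439b4396b605564'
slice="kubepods-burstable-pod${pod_uid}.slice"
echo "$slice"
```

Best-effort pods land under `kubepods-besteffort.slice` the same way; guaranteed pods sit directly under `kubepods.slice`.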
Oct 30 05:35:38.382859 kubelet[2612]: I1030 05:35:38.382832 2612 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 30 05:35:38.383240 kubelet[2612]: E1030 05:35:38.383223 2612 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost"
Oct 30 05:35:38.383777 kubelet[2612]: E1030 05:35:38.383750 2612 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 30 05:35:38.386542 systemd[1]: Created slice kubepods-burstable-pod698cc08d1a1ee66f201a2e6b00a28097.slice - libcontainer container kubepods-burstable-pod698cc08d1a1ee66f201a2e6b00a28097.slice.
Oct 30 05:35:38.387765 kubelet[2612]: E1030 05:35:38.387749 2612 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 30 05:35:38.410740 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice.
Oct 30 05:35:38.412012 kubelet[2612]: E1030 05:35:38.411902 2612 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 30 05:35:38.427995 kubelet[2612]: E1030 05:35:38.427962 2612 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="400ms"
Oct 30 05:35:38.522566 kubelet[2612]: I1030 05:35:38.522538 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/698cc08d1a1ee66f201a2e6b00a28097-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"698cc08d1a1ee66f201a2e6b00a28097\") " pod="kube-system/kube-apiserver-localhost"
Oct 30 05:35:38.522566 kubelet[2612]: I1030 05:35:38.522566 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 05:35:38.522680 kubelet[2612]: I1030 05:35:38.522578 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 05:35:38.522680 kubelet[2612]: I1030 05:35:38.522588 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 05:35:38.522680 kubelet[2612]: I1030 05:35:38.522607 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost"
Oct 30 05:35:38.522680 kubelet[2612]: I1030 05:35:38.522616 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/698cc08d1a1ee66f201a2e6b00a28097-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"698cc08d1a1ee66f201a2e6b00a28097\") " pod="kube-system/kube-apiserver-localhost"
Oct 30 05:35:38.522680 kubelet[2612]: I1030 05:35:38.522623 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/698cc08d1a1ee66f201a2e6b00a28097-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"698cc08d1a1ee66f201a2e6b00a28097\") " pod="kube-system/kube-apiserver-localhost"
Oct 30 05:35:38.522763 kubelet[2612]: I1030 05:35:38.522631 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 05:35:38.522763 kubelet[2612]: I1030 05:35:38.522638 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 05:35:38.584831 kubelet[2612]: I1030 05:35:38.584811 2612 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 30 05:35:38.585054 kubelet[2612]: E1030 05:35:38.585034 2612 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost"
Oct 30 05:35:38.687291 containerd[1688]: time="2025-10-30T05:35:38.687244092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}"
Oct 30 05:35:38.690165 containerd[1688]: time="2025-10-30T05:35:38.690146516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:698cc08d1a1ee66f201a2e6b00a28097,Namespace:kube-system,Attempt:0,}"
Oct 30 05:35:38.717286 containerd[1688]: time="2025-10-30T05:35:38.716843913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}"
Oct 30 05:35:38.829562 kubelet[2612]: E1030 05:35:38.829537 2612 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="800ms"
Oct 30 05:35:38.830256 containerd[1688]: time="2025-10-30T05:35:38.830235640Z" level=info msg="connecting to shim 0ba49fe248dd5468a73ff253f9b86ee09996b14e0f901d2831602d0f5f64c06d" address="unix:///run/containerd/s/dfceba0594628f99196e4ebefd37d2066f59cfc861f06354c50eef4b4fe59050" namespace=k8s.io protocol=ttrpc version=3
Oct 30 05:35:38.830734 containerd[1688]: time="2025-10-30T05:35:38.830578208Z" level=info msg="connecting to shim c0b4df958378a910989b9734d906050ccc721ed04f44681064f366afa6a1a80e" address="unix:///run/containerd/s/28dc7c8cc078024c55311a629619489354ed3f5e3766cd41e64bde4fc42594e7" namespace=k8s.io protocol=ttrpc version=3
Oct 30 05:35:38.835251 containerd[1688]: time="2025-10-30T05:35:38.834436769Z" level=info msg="connecting to shim 008833c42a1216f15aeb87bbd74a903ccc7cde58e4d88b6467ca016a0de4bc04" address="unix:///run/containerd/s/f80261320444a8fba484293850d290f90c2c8d14ea37b7c54d9f8bf61d4a39f6" namespace=k8s.io protocol=ttrpc version=3
Oct 30 05:35:38.918466 systemd[1]: Started cri-containerd-008833c42a1216f15aeb87bbd74a903ccc7cde58e4d88b6467ca016a0de4bc04.scope - libcontainer container 008833c42a1216f15aeb87bbd74a903ccc7cde58e4d88b6467ca016a0de4bc04.
Oct 30 05:35:38.920059 systemd[1]: Started cri-containerd-0ba49fe248dd5468a73ff253f9b86ee09996b14e0f901d2831602d0f5f64c06d.scope - libcontainer container 0ba49fe248dd5468a73ff253f9b86ee09996b14e0f901d2831602d0f5f64c06d.
Oct 30 05:35:38.922159 systemd[1]: Started cri-containerd-c0b4df958378a910989b9734d906050ccc721ed04f44681064f366afa6a1a80e.scope - libcontainer container c0b4df958378a910989b9734d906050ccc721ed04f44681064f366afa6a1a80e.
Oct 30 05:35:38.971577 containerd[1688]: time="2025-10-30T05:35:38.971504188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0b4df958378a910989b9734d906050ccc721ed04f44681064f366afa6a1a80e\""
Oct 30 05:35:38.974737 containerd[1688]: time="2025-10-30T05:35:38.974697452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:698cc08d1a1ee66f201a2e6b00a28097,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ba49fe248dd5468a73ff253f9b86ee09996b14e0f901d2831602d0f5f64c06d\""
Oct 30 05:35:38.978518 containerd[1688]: time="2025-10-30T05:35:38.978260229Z" level=info msg="CreateContainer within sandbox \"0ba49fe248dd5468a73ff253f9b86ee09996b14e0f901d2831602d0f5f64c06d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 30 05:35:38.978779 containerd[1688]: time="2025-10-30T05:35:38.978415655Z" level=info msg="CreateContainer within sandbox \"c0b4df958378a910989b9734d906050ccc721ed04f44681064f366afa6a1a80e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 30 05:35:38.987728 kubelet[2612]: I1030 05:35:38.987570 2612 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 30 05:35:38.988511 kubelet[2612]: E1030 05:35:38.988487 2612 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost"
Oct 30 05:35:38.993955 containerd[1688]: time="2025-10-30T05:35:38.993920356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"008833c42a1216f15aeb87bbd74a903ccc7cde58e4d88b6467ca016a0de4bc04\""
Oct 30 05:35:38.996614 containerd[1688]: time="2025-10-30T05:35:38.996584562Z" level=info msg="CreateContainer within sandbox \"008833c42a1216f15aeb87bbd74a903ccc7cde58e4d88b6467ca016a0de4bc04\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 30 05:35:39.002487 containerd[1688]: time="2025-10-30T05:35:39.002422665Z" level=info msg="Container eb58617ef4c6d4b8d89b17e6856782e5c25020a6f4683bb22d24aaf315096ad8: CDI devices from CRI Config.CDIDevices: []"
Oct 30 05:35:39.003449 containerd[1688]: time="2025-10-30T05:35:39.003430568Z" level=info msg="Container bc045a006ce7873ba10df15e77815a3481f53020445ba13836883fad9951845b: CDI devices from CRI Config.CDIDevices: []"
Oct 30 05:35:39.008697 containerd[1688]: time="2025-10-30T05:35:39.008581637Z" level=info msg="CreateContainer within sandbox \"0ba49fe248dd5468a73ff253f9b86ee09996b14e0f901d2831602d0f5f64c06d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"eb58617ef4c6d4b8d89b17e6856782e5c25020a6f4683bb22d24aaf315096ad8\""
Oct 30 05:35:39.008917 containerd[1688]: time="2025-10-30T05:35:39.008892282Z" level=info msg="Container fd168624e4d8d8c680d04ec059f60370a04481984137ee8eb4505594b416b47a: CDI devices from CRI Config.CDIDevices: []"
Oct 30 05:35:39.009230 containerd[1688]: time="2025-10-30T05:35:39.009217391Z" level=info msg="StartContainer for \"eb58617ef4c6d4b8d89b17e6856782e5c25020a6f4683bb22d24aaf315096ad8\""
Oct 30 05:35:39.009902 containerd[1688]: time="2025-10-30T05:35:39.009870219Z" level=info msg="connecting to shim eb58617ef4c6d4b8d89b17e6856782e5c25020a6f4683bb22d24aaf315096ad8" address="unix:///run/containerd/s/dfceba0594628f99196e4ebefd37d2066f59cfc861f06354c50eef4b4fe59050" protocol=ttrpc version=3
Oct 30 05:35:39.011553 containerd[1688]: time="2025-10-30T05:35:39.011533224Z" level=info msg="CreateContainer within sandbox \"c0b4df958378a910989b9734d906050ccc721ed04f44681064f366afa6a1a80e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bc045a006ce7873ba10df15e77815a3481f53020445ba13836883fad9951845b\""
Oct 30 05:35:39.011775 containerd[1688]: time="2025-10-30T05:35:39.011754781Z" level=info msg="StartContainer for \"bc045a006ce7873ba10df15e77815a3481f53020445ba13836883fad9951845b\""
Oct 30 05:35:39.012600 containerd[1688]: time="2025-10-30T05:35:39.012520079Z" level=info msg="connecting to shim bc045a006ce7873ba10df15e77815a3481f53020445ba13836883fad9951845b" address="unix:///run/containerd/s/28dc7c8cc078024c55311a629619489354ed3f5e3766cd41e64bde4fc42594e7" protocol=ttrpc version=3
Oct 30 05:35:39.013534 containerd[1688]: time="2025-10-30T05:35:39.013516497Z" level=info msg="CreateContainer within sandbox \"008833c42a1216f15aeb87bbd74a903ccc7cde58e4d88b6467ca016a0de4bc04\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fd168624e4d8d8c680d04ec059f60370a04481984137ee8eb4505594b416b47a\""
Oct 30 05:35:39.013807 containerd[1688]: time="2025-10-30T05:35:39.013787308Z" level=info msg="StartContainer for \"fd168624e4d8d8c680d04ec059f60370a04481984137ee8eb4505594b416b47a\""
Oct 30 05:35:39.014324 containerd[1688]: time="2025-10-30T05:35:39.014307944Z" level=info msg="connecting to shim fd168624e4d8d8c680d04ec059f60370a04481984137ee8eb4505594b416b47a" address="unix:///run/containerd/s/f80261320444a8fba484293850d290f90c2c8d14ea37b7c54d9f8bf61d4a39f6" protocol=ttrpc version=3
Oct 30 05:35:39.028678 systemd[1]: Started cri-containerd-eb58617ef4c6d4b8d89b17e6856782e5c25020a6f4683bb22d24aaf315096ad8.scope - libcontainer container eb58617ef4c6d4b8d89b17e6856782e5c25020a6f4683bb22d24aaf315096ad8.
Oct 30 05:35:39.037471 systemd[1]: Started cri-containerd-bc045a006ce7873ba10df15e77815a3481f53020445ba13836883fad9951845b.scope - libcontainer container bc045a006ce7873ba10df15e77815a3481f53020445ba13836883fad9951845b.
Oct 30 05:35:39.040509 systemd[1]: Started cri-containerd-fd168624e4d8d8c680d04ec059f60370a04481984137ee8eb4505594b416b47a.scope - libcontainer container fd168624e4d8d8c680d04ec059f60370a04481984137ee8eb4505594b416b47a.
Oct 30 05:35:39.066787 kubelet[2612]: E1030 05:35:39.066667 2612 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Oct 30 05:35:39.089420 containerd[1688]: time="2025-10-30T05:35:39.088871322Z" level=info msg="StartContainer for \"fd168624e4d8d8c680d04ec059f60370a04481984137ee8eb4505594b416b47a\" returns successfully"
Oct 30 05:35:39.094219 containerd[1688]: time="2025-10-30T05:35:39.094193455Z" level=info msg="StartContainer for \"eb58617ef4c6d4b8d89b17e6856782e5c25020a6f4683bb22d24aaf315096ad8\" returns successfully"
Oct 30 05:35:39.109263 containerd[1688]: time="2025-10-30T05:35:39.109035465Z" level=info msg="StartContainer for \"bc045a006ce7873ba10df15e77815a3481f53020445ba13836883fad9951845b\" returns successfully"
Oct 30 05:35:39.190679 kubelet[2612]: E1030 05:35:39.190654 2612 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 30 05:35:39.253079 kubelet[2612]: E1030 05:35:39.252502 2612 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 30 05:35:39.254474 kubelet[2612]: E1030 05:35:39.254464 2612 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 30 05:35:39.256129 kubelet[2612]: E1030 05:35:39.256065 2612 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 30 05:35:39.435834 kubelet[2612]: E1030 05:35:39.435812 2612 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 30 05:35:39.630844 kubelet[2612]: E1030 05:35:39.630566 2612 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="1.6s"
Oct 30 05:35:39.713394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3801689644.mount: Deactivated successfully.
Oct 30 05:35:39.790232 kubelet[2612]: I1030 05:35:39.790061 2612 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 30 05:35:40.256973 kubelet[2612]: E1030 05:35:40.256845 2612 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 30 05:35:40.257614 kubelet[2612]: E1030 05:35:40.257554 2612 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 30 05:35:40.984958 kubelet[2612]: I1030 05:35:40.984840 2612 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Oct 30 05:35:40.984958 kubelet[2612]: E1030 05:35:40.984867 2612 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Oct 30 05:35:41.001379 kubelet[2612]: E1030 05:35:41.001355 2612 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 30 05:35:41.102093 kubelet[2612]: E1030 05:35:41.102061 2612 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 30 05:35:41.202537 kubelet[2612]: E1030 05:35:41.202509 2612 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 30 05:35:41.258247 kubelet[2612]: E1030 05:35:41.258175 2612 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 30 05:35:41.302596 kubelet[2612]: E1030 05:35:41.302571 2612 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 30 05:35:41.403564 kubelet[2612]: E1030 05:35:41.403535 2612 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 30 05:35:41.519712 kubelet[2612]: I1030 05:35:41.519494 2612 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 30 05:35:41.532169 kubelet[2612]: E1030 05:35:41.532141 2612 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Oct 30 05:35:41.532169 kubelet[2612]: I1030 05:35:41.532161 2612 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 30 05:35:41.533247 kubelet[2612]: E1030 05:35:41.533226 2612 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Oct 30 05:35:41.533247 kubelet[2612]: I1030 05:35:41.533240 2612 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 30 05:35:41.534240 kubelet[2612]: E1030 05:35:41.534219 2612 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Oct 30 05:35:42.187187 kubelet[2612]: I1030 05:35:42.187123 2612 apiserver.go:52] "Watching apiserver"
Oct 30 05:35:42.220776 kubelet[2612]: I1030 05:35:42.220754 2612 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Oct 30 05:35:42.499363 systemd[1]: Reload requested from client PID 2887 ('systemctl') (unit session-9.scope)...
Oct 30 05:35:42.499374 systemd[1]: Reloading...
Oct 30 05:35:42.552287 zram_generator::config[2931]: No configuration found.
Oct 30 05:35:42.633046 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Oct 30 05:35:42.708735 systemd[1]: Reloading finished in 209 ms.
Oct 30 05:35:42.732300 kubelet[2612]: I1030 05:35:42.731203 2612 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 30 05:35:42.731370 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 05:35:42.746550 systemd[1]: kubelet.service: Deactivated successfully.
Oct 30 05:35:42.746713 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 05:35:42.746747 systemd[1]: kubelet.service: Consumed 913ms CPU time, 131M memory peak.
Oct 30 05:35:42.748380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 05:35:42.906932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 05:35:42.914572 (kubelet)[2999]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 30 05:35:42.990141 kubelet[2999]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 30 05:35:42.990141 kubelet[2999]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 30 05:35:42.990141 kubelet[2999]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 30 05:35:42.990386 kubelet[2999]: I1030 05:35:42.990167 2999 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 30 05:35:43.004541 kubelet[2999]: I1030 05:35:43.003776 2999 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Oct 30 05:35:43.004541 kubelet[2999]: I1030 05:35:43.003796 2999 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 30 05:35:43.004541 kubelet[2999]: I1030 05:35:43.003909 2999 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 30 05:35:43.004737 kubelet[2999]: I1030 05:35:43.004729 2999 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Oct 30 05:35:43.013775 kubelet[2999]: I1030 05:35:43.013760 2999 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 30 05:35:43.022713 kubelet[2999]: I1030 05:35:43.022701 2999 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 30 05:35:43.025806 kubelet[2999]: I1030 05:35:43.025792 2999 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 30 05:35:43.025943 kubelet[2999]: I1030 05:35:43.025909 2999 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 30 05:35:43.026099 kubelet[2999]: I1030 05:35:43.025944 2999 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 30 05:35:43.026159 kubelet[2999]: I1030 05:35:43.026104 2999 topology_manager.go:138] "Creating topology manager with none policy"
Oct 30 05:35:43.026159 kubelet[2999]: I1030 05:35:43.026115 2999 container_manager_linux.go:303] "Creating device plugin manager"
Oct 30 05:35:43.032954 kubelet[2999]: I1030 05:35:43.032944 2999 state_mem.go:36] "Initialized new in-memory state store"
Oct 30 05:35:43.038306 kubelet[2999]: I1030 05:35:43.038295 2999 kubelet.go:480] "Attempting to sync node with API server"
Oct 30 05:35:43.038336 kubelet[2999]: I1030 05:35:43.038314 2999 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 30 05:35:43.038336 kubelet[2999]: I1030 05:35:43.038332 2999 kubelet.go:386] "Adding apiserver pod source"
Oct 30 05:35:43.038368 kubelet[2999]: I1030 05:35:43.038343 2999 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 30 05:35:43.054257 kubelet[2999]: I1030 05:35:43.054241 2999 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 30 05:35:43.054566 kubelet[2999]: I1030 05:35:43.054552 2999 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 30 05:35:43.071119 kubelet[2999]: I1030 05:35:43.070769 2999 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Oct 30 05:35:43.071119 kubelet[2999]: I1030 05:35:43.070804 2999 server.go:1289] "Started kubelet"
Oct 30 05:35:43.071445 kubelet[2999]: I1030 05:35:43.071412 2999 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 30 05:35:43.081292 kubelet[2999]: I1030 05:35:43.081227 2999 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 30 05:35:43.081568 kubelet[2999]: I1030 05:35:43.081560 2999 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 30 05:35:43.086188 kubelet[2999]: I1030 05:35:43.086175 2999 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 30 05:35:43.086862 kubelet[2999]: I1030 05:35:43.086844 2999 server.go:317] "Adding debug handlers to kubelet server"
Oct 30 05:35:43.087940 kubelet[2999]: I1030 05:35:43.087645 2999 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 30 05:35:43.095420 kubelet[2999]: I1030 05:35:43.095233 2999 volume_manager.go:297] "Starting Kubelet Volume Manager"
Oct 30 05:35:43.095420 kubelet[2999]: I1030 05:35:43.095299 2999 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Oct 30 05:35:43.095420 kubelet[2999]: I1030 05:35:43.095349 2999 reconciler.go:26] "Reconciler: start to sync state"
Oct 30 05:35:43.096447 kubelet[2999]: I1030 05:35:43.096433 2999 factory.go:223] Registration of the systemd container factory successfully
Oct 30 05:35:43.096498 kubelet[2999]: I1030 05:35:43.096486 2999 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 30 05:35:43.096836 kubelet[2999]: E1030 05:35:43.096815 2999 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 30 05:35:43.097951 kubelet[2999]: I1030 05:35:43.097936 2999 factory.go:223] Registration of the containerd container factory successfully
Oct 30 05:35:43.098944 kubelet[2999]: I1030 05:35:43.098928 2999 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Oct 30 05:35:43.099756 kubelet[2999]: I1030 05:35:43.099597 2999 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Oct 30 05:35:43.099756 kubelet[2999]: I1030 05:35:43.099608 2999 status_manager.go:230] "Starting to sync pod status with apiserver"
Oct 30 05:35:43.099756 kubelet[2999]: I1030 05:35:43.099620 2999 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 30 05:35:43.099756 kubelet[2999]: I1030 05:35:43.099626 2999 kubelet.go:2436] "Starting kubelet main sync loop"
Oct 30 05:35:43.099756 kubelet[2999]: E1030 05:35:43.099646 2999 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 30 05:35:43.127095 kubelet[2999]: I1030 05:35:43.127080 2999 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 30 05:35:43.127759 kubelet[2999]: I1030 05:35:43.127217 2999 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 30 05:35:43.127759 kubelet[2999]: I1030 05:35:43.127239 2999 state_mem.go:36] "Initialized new in-memory state store"
Oct 30 05:35:43.127759 kubelet[2999]: I1030 05:35:43.127336 2999 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 30 05:35:43.127759 kubelet[2999]: I1030 05:35:43.127342 2999 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 30 05:35:43.127759 kubelet[2999]: I1030 05:35:43.127353 2999 policy_none.go:49] "None policy: Start"
Oct 30 05:35:43.127759 kubelet[2999]: I1030 05:35:43.127358 2999 memory_manager.go:186] "Starting memorymanager" policy="None"
Oct 30 05:35:43.127759 kubelet[2999]: I1030 05:35:43.127364 2999 state_mem.go:35] "Initializing new in-memory state store"
Oct 30 05:35:43.127759 kubelet[2999]: I1030 05:35:43.127418 2999 state_mem.go:75] "Updated machine memory state"
Oct 30 05:35:43.129706 kubelet[2999]: E1030 05:35:43.129690 2999 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 30 05:35:43.129793 kubelet[2999]: I1030 05:35:43.129778 2999 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 30 05:35:43.129818 kubelet[2999]: I1030 05:35:43.129788 2999 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 30 05:35:43.130722 kubelet[2999]: I1030 05:35:43.130604 2999 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 30 05:35:43.131578 kubelet[2999]: E1030 05:35:43.131464 2999 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 30 05:35:43.201193 kubelet[2999]: I1030 05:35:43.200802 2999 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 30 05:35:43.201193 kubelet[2999]: I1030 05:35:43.200854 2999 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 30 05:35:43.201193 kubelet[2999]: I1030 05:35:43.201175 2999 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 30 05:35:43.232451 kubelet[2999]: I1030 05:35:43.232427 2999 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 30 05:35:43.236392 kubelet[2999]: I1030 05:35:43.236372 2999 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Oct 30 05:35:43.236469 kubelet[2999]: I1030 05:35:43.236436 2999 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Oct 30 05:35:43.297802 kubelet[2999]: I1030 05:35:43.297766 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/698cc08d1a1ee66f201a2e6b00a28097-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"698cc08d1a1ee66f201a2e6b00a28097\") " pod="kube-system/kube-apiserver-localhost"
Oct 30 05:35:43.297802 kubelet[2999]: I1030 05:35:43.297799 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/698cc08d1a1ee66f201a2e6b00a28097-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"698cc08d1a1ee66f201a2e6b00a28097\") " pod="kube-system/kube-apiserver-localhost"
Oct 30 05:35:43.297936 kubelet[2999]: I1030 05:35:43.297822 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 05:35:43.297936 kubelet[2999]: I1030 05:35:43.297836 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 05:35:43.297936 kubelet[2999]: I1030 05:35:43.297852 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 05:35:43.297936 kubelet[2999]: I1030 05:35:43.297869 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 05:35:43.297936 kubelet[2999]: I1030 05:35:43.297895 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 05:35:43.298050 kubelet[2999]: I1030 05:35:43.297920 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost"
Oct 30 05:35:43.298050 kubelet[2999]: I1030 05:35:43.297939 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/698cc08d1a1ee66f201a2e6b00a28097-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"698cc08d1a1ee66f201a2e6b00a28097\") " pod="kube-system/kube-apiserver-localhost"
Oct 30 05:35:44.049830 kubelet[2999]: I1030 05:35:44.049665 2999 apiserver.go:52] "Watching apiserver"
Oct 30 05:35:44.095740 kubelet[2999]: I1030 05:35:44.095712 2999 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Oct 30 05:35:44.122732 kubelet[2999]: I1030 05:35:44.122706 2999 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 30 05:35:44.125286 kubelet[2999]: I1030 05:35:44.123633 2999 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 30 05:35:44.128116 kubelet[2999]: E1030 05:35:44.127982 2999 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Oct 30 05:35:44.131342 kubelet[2999]: E1030 05:35:44.131319 2999 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Oct 30 05:35:44.140090 kubelet[2999]: I1030 05:35:44.140043 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.140028557 podStartE2EDuration="1.140028557s" podCreationTimestamp="2025-10-30 05:35:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 05:35:44.138884658 +0000 UTC m=+1.210842988" watchObservedRunningTime="2025-10-30 05:35:44.140028557 +0000 UTC m=+1.211986879"
Oct 30 05:35:44.157732 kubelet[2999]: I1030 05:35:44.157693 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.157682881 podStartE2EDuration="1.157682881s" podCreationTimestamp="2025-10-30 05:35:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 05:35:44.149553639 +0000 UTC m=+1.221511962" watchObservedRunningTime="2025-10-30 05:35:44.157682881 +0000 UTC m=+1.229641203"
Oct 30 05:35:44.165026 kubelet[2999]: I1030 05:35:44.164982 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.16497201 podStartE2EDuration="1.16497201s" podCreationTimestamp="2025-10-30 05:35:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 05:35:44.157841223 +0000 UTC m=+1.229799553" watchObservedRunningTime="2025-10-30 05:35:44.16497201 +0000 UTC m=+1.236930335"
Oct 30 05:35:49.155420 kubelet[2999]: I1030 05:35:49.155397 2999 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct
30 05:35:49.155679 containerd[1688]: time="2025-10-30T05:35:49.155612104Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 30 05:35:49.155806 kubelet[2999]: I1030 05:35:49.155757 2999 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 30 05:35:50.309824 systemd[1]: Created slice kubepods-besteffort-pod3c66e19b_f56b_4a66_918a_be36ed86b385.slice - libcontainer container kubepods-besteffort-pod3c66e19b_f56b_4a66_918a_be36ed86b385.slice. Oct 30 05:35:50.343748 kubelet[2999]: I1030 05:35:50.343688 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3c66e19b-f56b-4a66-918a-be36ed86b385-var-lib-calico\") pod \"tigera-operator-7dcd859c48-6xmwp\" (UID: \"3c66e19b-f56b-4a66-918a-be36ed86b385\") " pod="tigera-operator/tigera-operator-7dcd859c48-6xmwp" Oct 30 05:35:50.344068 kubelet[2999]: I1030 05:35:50.343807 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffk5k\" (UniqueName: \"kubernetes.io/projected/3c66e19b-f56b-4a66-918a-be36ed86b385-kube-api-access-ffk5k\") pod \"tigera-operator-7dcd859c48-6xmwp\" (UID: \"3c66e19b-f56b-4a66-918a-be36ed86b385\") " pod="tigera-operator/tigera-operator-7dcd859c48-6xmwp" Oct 30 05:35:50.401464 systemd[1]: Created slice kubepods-besteffort-pod2916b388_6212_45de_9dc2_92702196c25f.slice - libcontainer container kubepods-besteffort-pod2916b388_6212_45de_9dc2_92702196c25f.slice. 
Oct 30 05:35:50.445028 kubelet[2999]: I1030 05:35:50.444925 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2916b388-6212-45de-9dc2-92702196c25f-xtables-lock\") pod \"kube-proxy-kdz8r\" (UID: \"2916b388-6212-45de-9dc2-92702196c25f\") " pod="kube-system/kube-proxy-kdz8r" Oct 30 05:35:50.445028 kubelet[2999]: I1030 05:35:50.444988 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2916b388-6212-45de-9dc2-92702196c25f-kube-proxy\") pod \"kube-proxy-kdz8r\" (UID: \"2916b388-6212-45de-9dc2-92702196c25f\") " pod="kube-system/kube-proxy-kdz8r" Oct 30 05:35:50.445241 kubelet[2999]: I1030 05:35:50.445012 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2916b388-6212-45de-9dc2-92702196c25f-lib-modules\") pod \"kube-proxy-kdz8r\" (UID: \"2916b388-6212-45de-9dc2-92702196c25f\") " pod="kube-system/kube-proxy-kdz8r" Oct 30 05:35:50.445241 kubelet[2999]: I1030 05:35:50.445183 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz52s\" (UniqueName: \"kubernetes.io/projected/2916b388-6212-45de-9dc2-92702196c25f-kube-api-access-kz52s\") pod \"kube-proxy-kdz8r\" (UID: \"2916b388-6212-45de-9dc2-92702196c25f\") " pod="kube-system/kube-proxy-kdz8r" Oct 30 05:35:50.622358 containerd[1688]: time="2025-10-30T05:35:50.622102310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-6xmwp,Uid:3c66e19b-f56b-4a66-918a-be36ed86b385,Namespace:tigera-operator,Attempt:0,}" Oct 30 05:35:50.634525 containerd[1688]: time="2025-10-30T05:35:50.634482177Z" level=info msg="connecting to shim 4f10832ea8415859d3aa4b313ae63c23e0784babf1c5f7ea1cc28c15b8319472" 
address="unix:///run/containerd/s/99d4e0bd1f6c221436ff1e06738459900dc5c5e04f3a251169eea2cfca6e5fae" namespace=k8s.io protocol=ttrpc version=3 Oct 30 05:35:50.659494 systemd[1]: Started cri-containerd-4f10832ea8415859d3aa4b313ae63c23e0784babf1c5f7ea1cc28c15b8319472.scope - libcontainer container 4f10832ea8415859d3aa4b313ae63c23e0784babf1c5f7ea1cc28c15b8319472. Oct 30 05:35:50.689402 containerd[1688]: time="2025-10-30T05:35:50.689370993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-6xmwp,Uid:3c66e19b-f56b-4a66-918a-be36ed86b385,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4f10832ea8415859d3aa4b313ae63c23e0784babf1c5f7ea1cc28c15b8319472\"" Oct 30 05:35:50.690458 containerd[1688]: time="2025-10-30T05:35:50.690435445Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 30 05:35:50.705613 containerd[1688]: time="2025-10-30T05:35:50.705574635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kdz8r,Uid:2916b388-6212-45de-9dc2-92702196c25f,Namespace:kube-system,Attempt:0,}" Oct 30 05:35:50.717818 containerd[1688]: time="2025-10-30T05:35:50.717789996Z" level=info msg="connecting to shim 411acebda2ede7b07c1457cc2e08542e2d37860cd00c657e266846bc83b6fe7f" address="unix:///run/containerd/s/5dd45c5f83562d3701e2d31a459e106a452e7848fb72ca9e243db5e9a5cfb485" namespace=k8s.io protocol=ttrpc version=3 Oct 30 05:35:50.739393 systemd[1]: Started cri-containerd-411acebda2ede7b07c1457cc2e08542e2d37860cd00c657e266846bc83b6fe7f.scope - libcontainer container 411acebda2ede7b07c1457cc2e08542e2d37860cd00c657e266846bc83b6fe7f. 
Oct 30 05:35:50.761868 containerd[1688]: time="2025-10-30T05:35:50.761821312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kdz8r,Uid:2916b388-6212-45de-9dc2-92702196c25f,Namespace:kube-system,Attempt:0,} returns sandbox id \"411acebda2ede7b07c1457cc2e08542e2d37860cd00c657e266846bc83b6fe7f\"" Oct 30 05:35:50.770883 containerd[1688]: time="2025-10-30T05:35:50.770851255Z" level=info msg="CreateContainer within sandbox \"411acebda2ede7b07c1457cc2e08542e2d37860cd00c657e266846bc83b6fe7f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 30 05:35:50.776475 containerd[1688]: time="2025-10-30T05:35:50.776414824Z" level=info msg="Container 3680cde77c13c03d87d02009999a7b530fbcf1b41360eab4d63fee32ce41c1bb: CDI devices from CRI Config.CDIDevices: []" Oct 30 05:35:50.779667 containerd[1688]: time="2025-10-30T05:35:50.779647511Z" level=info msg="CreateContainer within sandbox \"411acebda2ede7b07c1457cc2e08542e2d37860cd00c657e266846bc83b6fe7f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3680cde77c13c03d87d02009999a7b530fbcf1b41360eab4d63fee32ce41c1bb\"" Oct 30 05:35:50.780166 containerd[1688]: time="2025-10-30T05:35:50.780156507Z" level=info msg="StartContainer for \"3680cde77c13c03d87d02009999a7b530fbcf1b41360eab4d63fee32ce41c1bb\"" Oct 30 05:35:50.781139 containerd[1688]: time="2025-10-30T05:35:50.781110928Z" level=info msg="connecting to shim 3680cde77c13c03d87d02009999a7b530fbcf1b41360eab4d63fee32ce41c1bb" address="unix:///run/containerd/s/5dd45c5f83562d3701e2d31a459e106a452e7848fb72ca9e243db5e9a5cfb485" protocol=ttrpc version=3 Oct 30 05:35:50.795489 systemd[1]: Started cri-containerd-3680cde77c13c03d87d02009999a7b530fbcf1b41360eab4d63fee32ce41c1bb.scope - libcontainer container 3680cde77c13c03d87d02009999a7b530fbcf1b41360eab4d63fee32ce41c1bb. 
Oct 30 05:35:50.817914 containerd[1688]: time="2025-10-30T05:35:50.817879584Z" level=info msg="StartContainer for \"3680cde77c13c03d87d02009999a7b530fbcf1b41360eab4d63fee32ce41c1bb\" returns successfully" Oct 30 05:35:51.290706 kubelet[2999]: I1030 05:35:51.290650 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kdz8r" podStartSLOduration=1.290641108 podStartE2EDuration="1.290641108s" podCreationTimestamp="2025-10-30 05:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 05:35:51.138673415 +0000 UTC m=+8.210631753" watchObservedRunningTime="2025-10-30 05:35:51.290641108 +0000 UTC m=+8.362599437" Oct 30 05:35:51.931725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1440817047.mount: Deactivated successfully. Oct 30 05:35:52.408586 containerd[1688]: time="2025-10-30T05:35:52.408515072Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 05:35:52.409084 containerd[1688]: time="2025-10-30T05:35:52.408947016Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 30 05:35:52.409880 containerd[1688]: time="2025-10-30T05:35:52.409236806Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 05:35:52.410118 containerd[1688]: time="2025-10-30T05:35:52.410103703Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 05:35:52.410548 containerd[1688]: time="2025-10-30T05:35:52.410534671Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id 
\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.720008038s" Oct 30 05:35:52.410602 containerd[1688]: time="2025-10-30T05:35:52.410593473Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 30 05:35:52.412416 containerd[1688]: time="2025-10-30T05:35:52.412404000Z" level=info msg="CreateContainer within sandbox \"4f10832ea8415859d3aa4b313ae63c23e0784babf1c5f7ea1cc28c15b8319472\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 30 05:35:52.423709 containerd[1688]: time="2025-10-30T05:35:52.423692419Z" level=info msg="Container 6adc1453f97054e022489d65d3125c69ac8c9176c0e598ebb6a7c346c96cef81: CDI devices from CRI Config.CDIDevices: []" Oct 30 05:35:52.430981 containerd[1688]: time="2025-10-30T05:35:52.430966637Z" level=info msg="CreateContainer within sandbox \"4f10832ea8415859d3aa4b313ae63c23e0784babf1c5f7ea1cc28c15b8319472\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6adc1453f97054e022489d65d3125c69ac8c9176c0e598ebb6a7c346c96cef81\"" Oct 30 05:35:52.431235 containerd[1688]: time="2025-10-30T05:35:52.431223421Z" level=info msg="StartContainer for \"6adc1453f97054e022489d65d3125c69ac8c9176c0e598ebb6a7c346c96cef81\"" Oct 30 05:35:52.432571 containerd[1688]: time="2025-10-30T05:35:52.432533055Z" level=info msg="connecting to shim 6adc1453f97054e022489d65d3125c69ac8c9176c0e598ebb6a7c346c96cef81" address="unix:///run/containerd/s/99d4e0bd1f6c221436ff1e06738459900dc5c5e04f3a251169eea2cfca6e5fae" protocol=ttrpc version=3 Oct 30 05:35:52.446502 systemd[1]: Started cri-containerd-6adc1453f97054e022489d65d3125c69ac8c9176c0e598ebb6a7c346c96cef81.scope - libcontainer container 
6adc1453f97054e022489d65d3125c69ac8c9176c0e598ebb6a7c346c96cef81. Oct 30 05:35:52.473092 containerd[1688]: time="2025-10-30T05:35:52.473065632Z" level=info msg="StartContainer for \"6adc1453f97054e022489d65d3125c69ac8c9176c0e598ebb6a7c346c96cef81\" returns successfully" Oct 30 05:35:55.023600 kubelet[2999]: I1030 05:35:55.023401 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-6xmwp" podStartSLOduration=3.302346205 podStartE2EDuration="5.023389356s" podCreationTimestamp="2025-10-30 05:35:50 +0000 UTC" firstStartedPulling="2025-10-30 05:35:50.690106045 +0000 UTC m=+7.762064362" lastFinishedPulling="2025-10-30 05:35:52.411149193 +0000 UTC m=+9.483107513" observedRunningTime="2025-10-30 05:35:53.143685266 +0000 UTC m=+10.215643602" watchObservedRunningTime="2025-10-30 05:35:55.023389356 +0000 UTC m=+12.095347679" Oct 30 05:35:57.277181 sudo[2007]: pam_unix(sudo:session): session closed for user root Oct 30 05:35:57.279613 sshd[2006]: Connection closed by 139.178.68.195 port 55062 Oct 30 05:35:57.280844 sshd-session[2003]: pam_unix(sshd:session): session closed for user core Oct 30 05:35:57.284552 systemd-logind[1654]: Session 9 logged out. Waiting for processes to exit. Oct 30 05:35:57.285776 systemd[1]: sshd@6-139.178.70.106:22-139.178.68.195:55062.service: Deactivated successfully. Oct 30 05:35:57.286923 systemd[1]: session-9.scope: Deactivated successfully. Oct 30 05:35:57.287038 systemd[1]: session-9.scope: Consumed 3.507s CPU time, 154.4M memory peak. Oct 30 05:35:57.290227 systemd-logind[1654]: Removed session 9. Oct 30 05:36:01.222000 systemd[1]: Created slice kubepods-besteffort-pod1ec6cb8d_2c41_403f_b09c_9c9d69038213.slice - libcontainer container kubepods-besteffort-pod1ec6cb8d_2c41_403f_b09c_9c9d69038213.slice. 
Oct 30 05:36:01.314235 kubelet[2999]: I1030 05:36:01.314206 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1ec6cb8d-2c41-403f-b09c-9c9d69038213-typha-certs\") pod \"calico-typha-77789666b6-f7q5d\" (UID: \"1ec6cb8d-2c41-403f-b09c-9c9d69038213\") " pod="calico-system/calico-typha-77789666b6-f7q5d" Oct 30 05:36:01.314650 kubelet[2999]: I1030 05:36:01.314577 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ec6cb8d-2c41-403f-b09c-9c9d69038213-tigera-ca-bundle\") pod \"calico-typha-77789666b6-f7q5d\" (UID: \"1ec6cb8d-2c41-403f-b09c-9c9d69038213\") " pod="calico-system/calico-typha-77789666b6-f7q5d" Oct 30 05:36:01.314650 kubelet[2999]: I1030 05:36:01.314605 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpq8v\" (UniqueName: \"kubernetes.io/projected/1ec6cb8d-2c41-403f-b09c-9c9d69038213-kube-api-access-cpq8v\") pod \"calico-typha-77789666b6-f7q5d\" (UID: \"1ec6cb8d-2c41-403f-b09c-9c9d69038213\") " pod="calico-system/calico-typha-77789666b6-f7q5d" Oct 30 05:36:01.441821 systemd[1]: Created slice kubepods-besteffort-pod35308386_d9de_4441_9567_277e1d4ad6ec.slice - libcontainer container kubepods-besteffort-pod35308386_d9de_4441_9567_277e1d4ad6ec.slice. 
Oct 30 05:36:01.515777 kubelet[2999]: I1030 05:36:01.515712 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35308386-d9de-4441-9567-277e1d4ad6ec-xtables-lock\") pod \"calico-node-7mqxk\" (UID: \"35308386-d9de-4441-9567-277e1d4ad6ec\") " pod="calico-system/calico-node-7mqxk" Oct 30 05:36:01.515777 kubelet[2999]: I1030 05:36:01.515737 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/35308386-d9de-4441-9567-277e1d4ad6ec-cni-bin-dir\") pod \"calico-node-7mqxk\" (UID: \"35308386-d9de-4441-9567-277e1d4ad6ec\") " pod="calico-system/calico-node-7mqxk" Oct 30 05:36:01.515777 kubelet[2999]: I1030 05:36:01.515747 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6wls\" (UniqueName: \"kubernetes.io/projected/35308386-d9de-4441-9567-277e1d4ad6ec-kube-api-access-d6wls\") pod \"calico-node-7mqxk\" (UID: \"35308386-d9de-4441-9567-277e1d4ad6ec\") " pod="calico-system/calico-node-7mqxk" Oct 30 05:36:01.515777 kubelet[2999]: I1030 05:36:01.515760 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/35308386-d9de-4441-9567-277e1d4ad6ec-cni-net-dir\") pod \"calico-node-7mqxk\" (UID: \"35308386-d9de-4441-9567-277e1d4ad6ec\") " pod="calico-system/calico-node-7mqxk" Oct 30 05:36:01.516115 kubelet[2999]: I1030 05:36:01.515978 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/35308386-d9de-4441-9567-277e1d4ad6ec-cni-log-dir\") pod \"calico-node-7mqxk\" (UID: \"35308386-d9de-4441-9567-277e1d4ad6ec\") " pod="calico-system/calico-node-7mqxk" Oct 30 05:36:01.516115 kubelet[2999]: I1030 05:36:01.515993 2999 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/35308386-d9de-4441-9567-277e1d4ad6ec-flexvol-driver-host\") pod \"calico-node-7mqxk\" (UID: \"35308386-d9de-4441-9567-277e1d4ad6ec\") " pod="calico-system/calico-node-7mqxk" Oct 30 05:36:01.516115 kubelet[2999]: I1030 05:36:01.516002 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35308386-d9de-4441-9567-277e1d4ad6ec-lib-modules\") pod \"calico-node-7mqxk\" (UID: \"35308386-d9de-4441-9567-277e1d4ad6ec\") " pod="calico-system/calico-node-7mqxk" Oct 30 05:36:01.516115 kubelet[2999]: I1030 05:36:01.516011 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35308386-d9de-4441-9567-277e1d4ad6ec-tigera-ca-bundle\") pod \"calico-node-7mqxk\" (UID: \"35308386-d9de-4441-9567-277e1d4ad6ec\") " pod="calico-system/calico-node-7mqxk" Oct 30 05:36:01.516115 kubelet[2999]: I1030 05:36:01.516024 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/35308386-d9de-4441-9567-277e1d4ad6ec-var-lib-calico\") pod \"calico-node-7mqxk\" (UID: \"35308386-d9de-4441-9567-277e1d4ad6ec\") " pod="calico-system/calico-node-7mqxk" Oct 30 05:36:01.516218 kubelet[2999]: I1030 05:36:01.516034 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/35308386-d9de-4441-9567-277e1d4ad6ec-var-run-calico\") pod \"calico-node-7mqxk\" (UID: \"35308386-d9de-4441-9567-277e1d4ad6ec\") " pod="calico-system/calico-node-7mqxk" Oct 30 05:36:01.516218 kubelet[2999]: I1030 05:36:01.516042 2999 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/35308386-d9de-4441-9567-277e1d4ad6ec-node-certs\") pod \"calico-node-7mqxk\" (UID: \"35308386-d9de-4441-9567-277e1d4ad6ec\") " pod="calico-system/calico-node-7mqxk" Oct 30 05:36:01.516218 kubelet[2999]: I1030 05:36:01.516052 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/35308386-d9de-4441-9567-277e1d4ad6ec-policysync\") pod \"calico-node-7mqxk\" (UID: \"35308386-d9de-4441-9567-277e1d4ad6ec\") " pod="calico-system/calico-node-7mqxk" Oct 30 05:36:01.529010 containerd[1688]: time="2025-10-30T05:36:01.528985780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77789666b6-f7q5d,Uid:1ec6cb8d-2c41-403f-b09c-9c9d69038213,Namespace:calico-system,Attempt:0,}" Oct 30 05:36:01.619379 kubelet[2999]: E1030 05:36:01.619325 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.619379 kubelet[2999]: W1030 05:36:01.619342 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.619379 kubelet[2999]: E1030 05:36:01.619356 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.619916 kubelet[2999]: E1030 05:36:01.619479 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.619916 kubelet[2999]: W1030 05:36:01.619485 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.619916 kubelet[2999]: E1030 05:36:01.619492 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.619916 kubelet[2999]: E1030 05:36:01.619746 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.619916 kubelet[2999]: W1030 05:36:01.619753 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.619916 kubelet[2999]: E1030 05:36:01.619760 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.620468 kubelet[2999]: E1030 05:36:01.620286 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.620468 kubelet[2999]: W1030 05:36:01.620429 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.620468 kubelet[2999]: E1030 05:36:01.620439 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.620906 kubelet[2999]: E1030 05:36:01.620891 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.620906 kubelet[2999]: W1030 05:36:01.620898 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.620906 kubelet[2999]: E1030 05:36:01.620905 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.625782 kubelet[2999]: E1030 05:36:01.625746 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.625782 kubelet[2999]: W1030 05:36:01.625758 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.625939 kubelet[2999]: E1030 05:36:01.625897 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.626430 kubelet[2999]: E1030 05:36:01.626396 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.626430 kubelet[2999]: W1030 05:36:01.626407 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.626430 kubelet[2999]: E1030 05:36:01.626415 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.627049 kubelet[2999]: E1030 05:36:01.626935 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.627049 kubelet[2999]: W1030 05:36:01.626944 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.627049 kubelet[2999]: E1030 05:36:01.626952 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.627390 kubelet[2999]: E1030 05:36:01.627368 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.627390 kubelet[2999]: W1030 05:36:01.627379 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.627390 kubelet[2999]: E1030 05:36:01.627388 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.698938 kubelet[2999]: E1030 05:36:01.698912 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.699052 kubelet[2999]: W1030 05:36:01.699014 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.699052 kubelet[2999]: E1030 05:36:01.699031 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.748294 containerd[1688]: time="2025-10-30T05:36:01.748241566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7mqxk,Uid:35308386-d9de-4441-9567-277e1d4ad6ec,Namespace:calico-system,Attempt:0,}" Oct 30 05:36:01.774998 kubelet[2999]: E1030 05:36:01.774855 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzqzl" podUID="9e840872-e6a6-422f-a0c7-b6b186a24394" Oct 30 05:36:01.814235 kubelet[2999]: E1030 05:36:01.814196 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.814235 kubelet[2999]: W1030 05:36:01.814212 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.814235 kubelet[2999]: E1030 05:36:01.814243 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.814464 kubelet[2999]: E1030 05:36:01.814347 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.814464 kubelet[2999]: W1030 05:36:01.814363 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.814464 kubelet[2999]: E1030 05:36:01.814374 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.814610 kubelet[2999]: E1030 05:36:01.814589 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.814610 kubelet[2999]: W1030 05:36:01.814595 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.814610 kubelet[2999]: E1030 05:36:01.814600 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.814734 kubelet[2999]: E1030 05:36:01.814725 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.814734 kubelet[2999]: W1030 05:36:01.814733 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.814799 kubelet[2999]: E1030 05:36:01.814740 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.814822 kubelet[2999]: E1030 05:36:01.814815 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.814822 kubelet[2999]: W1030 05:36:01.814819 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.814889 kubelet[2999]: E1030 05:36:01.814824 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.817263 kubelet[2999]: E1030 05:36:01.816358 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.817263 kubelet[2999]: W1030 05:36:01.816367 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.817263 kubelet[2999]: E1030 05:36:01.816377 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.817432 kubelet[2999]: E1030 05:36:01.817353 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.817432 kubelet[2999]: W1030 05:36:01.817366 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.817432 kubelet[2999]: E1030 05:36:01.817376 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.817546 kubelet[2999]: E1030 05:36:01.817465 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.817546 kubelet[2999]: W1030 05:36:01.817469 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.817546 kubelet[2999]: E1030 05:36:01.817474 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.817663 kubelet[2999]: E1030 05:36:01.817563 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.817663 kubelet[2999]: W1030 05:36:01.817568 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.817663 kubelet[2999]: E1030 05:36:01.817572 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.818082 kubelet[2999]: E1030 05:36:01.817944 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.818082 kubelet[2999]: W1030 05:36:01.817950 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.818082 kubelet[2999]: E1030 05:36:01.817958 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.818199 kubelet[2999]: E1030 05:36:01.818106 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.818199 kubelet[2999]: W1030 05:36:01.818111 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.818199 kubelet[2999]: E1030 05:36:01.818116 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.818322 kubelet[2999]: E1030 05:36:01.818239 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.820305 kubelet[2999]: W1030 05:36:01.819289 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.820305 kubelet[2999]: E1030 05:36:01.819306 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.820305 kubelet[2999]: E1030 05:36:01.819426 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.820305 kubelet[2999]: W1030 05:36:01.819433 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.820305 kubelet[2999]: E1030 05:36:01.819440 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.820305 kubelet[2999]: E1030 05:36:01.819536 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.820305 kubelet[2999]: W1030 05:36:01.819541 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.820305 kubelet[2999]: E1030 05:36:01.819551 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.821710 kubelet[2999]: E1030 05:36:01.820847 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.821710 kubelet[2999]: W1030 05:36:01.820858 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.821710 kubelet[2999]: E1030 05:36:01.820868 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.821710 kubelet[2999]: E1030 05:36:01.820953 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.821710 kubelet[2999]: W1030 05:36:01.820960 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.821710 kubelet[2999]: E1030 05:36:01.820965 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.821710 kubelet[2999]: E1030 05:36:01.821132 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.821710 kubelet[2999]: W1030 05:36:01.821136 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.821710 kubelet[2999]: E1030 05:36:01.821141 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.821710 kubelet[2999]: E1030 05:36:01.821254 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.821918 kubelet[2999]: W1030 05:36:01.821258 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.821918 kubelet[2999]: E1030 05:36:01.821264 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.821918 kubelet[2999]: E1030 05:36:01.821441 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.821918 kubelet[2999]: W1030 05:36:01.821445 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.821918 kubelet[2999]: E1030 05:36:01.821451 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.821918 kubelet[2999]: E1030 05:36:01.821600 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.821918 kubelet[2999]: W1030 05:36:01.821605 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.821918 kubelet[2999]: E1030 05:36:01.821610 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.823704 kubelet[2999]: E1030 05:36:01.823344 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.823704 kubelet[2999]: W1030 05:36:01.823353 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.823704 kubelet[2999]: E1030 05:36:01.823362 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.823704 kubelet[2999]: I1030 05:36:01.823381 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nqxs\" (UniqueName: \"kubernetes.io/projected/9e840872-e6a6-422f-a0c7-b6b186a24394-kube-api-access-7nqxs\") pod \"csi-node-driver-wzqzl\" (UID: \"9e840872-e6a6-422f-a0c7-b6b186a24394\") " pod="calico-system/csi-node-driver-wzqzl" Oct 30 05:36:01.823704 kubelet[2999]: E1030 05:36:01.823481 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.823704 kubelet[2999]: W1030 05:36:01.823487 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.823704 kubelet[2999]: E1030 05:36:01.823492 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.823704 kubelet[2999]: I1030 05:36:01.823500 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9e840872-e6a6-422f-a0c7-b6b186a24394-varrun\") pod \"csi-node-driver-wzqzl\" (UID: \"9e840872-e6a6-422f-a0c7-b6b186a24394\") " pod="calico-system/csi-node-driver-wzqzl" Oct 30 05:36:01.823704 kubelet[2999]: E1030 05:36:01.823579 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.824544 kubelet[2999]: W1030 05:36:01.823584 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.824544 kubelet[2999]: E1030 05:36:01.823589 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.824544 kubelet[2999]: I1030 05:36:01.823597 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9e840872-e6a6-422f-a0c7-b6b186a24394-registration-dir\") pod \"csi-node-driver-wzqzl\" (UID: \"9e840872-e6a6-422f-a0c7-b6b186a24394\") " pod="calico-system/csi-node-driver-wzqzl" Oct 30 05:36:01.824544 kubelet[2999]: E1030 05:36:01.823670 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.824544 kubelet[2999]: W1030 05:36:01.823675 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.824544 kubelet[2999]: E1030 05:36:01.823679 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.824544 kubelet[2999]: I1030 05:36:01.823688 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9e840872-e6a6-422f-a0c7-b6b186a24394-socket-dir\") pod \"csi-node-driver-wzqzl\" (UID: \"9e840872-e6a6-422f-a0c7-b6b186a24394\") " pod="calico-system/csi-node-driver-wzqzl" Oct 30 05:36:01.824544 kubelet[2999]: E1030 05:36:01.823761 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.824745 kubelet[2999]: W1030 05:36:01.823765 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.824745 kubelet[2999]: E1030 05:36:01.823772 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.824745 kubelet[2999]: I1030 05:36:01.823784 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e840872-e6a6-422f-a0c7-b6b186a24394-kubelet-dir\") pod \"csi-node-driver-wzqzl\" (UID: \"9e840872-e6a6-422f-a0c7-b6b186a24394\") " pod="calico-system/csi-node-driver-wzqzl" Oct 30 05:36:01.824745 kubelet[2999]: E1030 05:36:01.823884 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.824745 kubelet[2999]: W1030 05:36:01.823888 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.824745 kubelet[2999]: E1030 05:36:01.823893 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.825353 kubelet[2999]: E1030 05:36:01.825339 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.825353 kubelet[2999]: W1030 05:36:01.825349 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.827683 kubelet[2999]: E1030 05:36:01.825357 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.827683 kubelet[2999]: E1030 05:36:01.825530 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.827683 kubelet[2999]: W1030 05:36:01.825537 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.827683 kubelet[2999]: E1030 05:36:01.825544 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.827683 kubelet[2999]: E1030 05:36:01.825631 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.827683 kubelet[2999]: W1030 05:36:01.825637 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.827683 kubelet[2999]: E1030 05:36:01.825646 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.827683 kubelet[2999]: E1030 05:36:01.825730 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.827683 kubelet[2999]: W1030 05:36:01.825736 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.827683 kubelet[2999]: E1030 05:36:01.825743 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.827936 kubelet[2999]: E1030 05:36:01.825835 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.827936 kubelet[2999]: W1030 05:36:01.825840 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.827936 kubelet[2999]: E1030 05:36:01.825846 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.827936 kubelet[2999]: E1030 05:36:01.825920 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.827936 kubelet[2999]: W1030 05:36:01.825926 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.827936 kubelet[2999]: E1030 05:36:01.825932 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.827936 kubelet[2999]: E1030 05:36:01.826018 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.827936 kubelet[2999]: W1030 05:36:01.826022 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.827936 kubelet[2999]: E1030 05:36:01.826034 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.827936 kubelet[2999]: E1030 05:36:01.827218 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.828152 kubelet[2999]: W1030 05:36:01.827223 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.828152 kubelet[2999]: E1030 05:36:01.827230 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.828152 kubelet[2999]: E1030 05:36:01.827692 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.828152 kubelet[2999]: W1030 05:36:01.827699 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.828152 kubelet[2999]: E1030 05:36:01.827708 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.844679 containerd[1688]: time="2025-10-30T05:36:01.844625568Z" level=info msg="connecting to shim cd511a3b1c1ced468c8a2c194f49d309b2a4c4c2d6a76fd92eada4a93d4da6fa" address="unix:///run/containerd/s/51bf54fa0f91fb80f76ec41290e0b776aa2097ddfd377870b77767280312a309" namespace=k8s.io protocol=ttrpc version=3 Oct 30 05:36:01.846165 containerd[1688]: time="2025-10-30T05:36:01.846142310Z" level=info msg="connecting to shim ebf396814b1d89f75dcaf1f06391964b114ddf5b206e6125c15b8748af8e7fa8" address="unix:///run/containerd/s/1979801019f7245d0212605a95324da7937aa2c1e62a78da90ff121d25a9cf6b" namespace=k8s.io protocol=ttrpc version=3 Oct 30 05:36:01.870564 systemd[1]: Started cri-containerd-cd511a3b1c1ced468c8a2c194f49d309b2a4c4c2d6a76fd92eada4a93d4da6fa.scope - libcontainer container cd511a3b1c1ced468c8a2c194f49d309b2a4c4c2d6a76fd92eada4a93d4da6fa. Oct 30 05:36:01.877025 systemd[1]: Started cri-containerd-ebf396814b1d89f75dcaf1f06391964b114ddf5b206e6125c15b8748af8e7fa8.scope - libcontainer container ebf396814b1d89f75dcaf1f06391964b114ddf5b206e6125c15b8748af8e7fa8. Oct 30 05:36:01.924827 kubelet[2999]: E1030 05:36:01.924414 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.924827 kubelet[2999]: W1030 05:36:01.924427 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.924827 kubelet[2999]: E1030 05:36:01.924438 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.928470 kubelet[2999]: E1030 05:36:01.925150 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.928470 kubelet[2999]: W1030 05:36:01.925156 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.928470 kubelet[2999]: E1030 05:36:01.925162 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.928470 kubelet[2999]: E1030 05:36:01.925255 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.928470 kubelet[2999]: W1030 05:36:01.925259 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.928470 kubelet[2999]: E1030 05:36:01.925264 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.928470 kubelet[2999]: E1030 05:36:01.925393 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.928470 kubelet[2999]: W1030 05:36:01.925399 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.928470 kubelet[2999]: E1030 05:36:01.925405 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.928470 kubelet[2999]: E1030 05:36:01.925540 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.929645 kubelet[2999]: W1030 05:36:01.925548 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.929645 kubelet[2999]: E1030 05:36:01.925558 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.929645 kubelet[2999]: E1030 05:36:01.925655 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.929645 kubelet[2999]: W1030 05:36:01.925660 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.929645 kubelet[2999]: E1030 05:36:01.925665 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.929645 kubelet[2999]: E1030 05:36:01.925759 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.929645 kubelet[2999]: W1030 05:36:01.925764 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.929645 kubelet[2999]: E1030 05:36:01.925768 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.929645 kubelet[2999]: E1030 05:36:01.926102 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.929645 kubelet[2999]: W1030 05:36:01.926107 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.929800 kubelet[2999]: E1030 05:36:01.926113 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.929800 kubelet[2999]: E1030 05:36:01.926208 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.929800 kubelet[2999]: W1030 05:36:01.926212 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.929800 kubelet[2999]: E1030 05:36:01.926217 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.929800 kubelet[2999]: E1030 05:36:01.926392 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.929800 kubelet[2999]: W1030 05:36:01.926398 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.929800 kubelet[2999]: E1030 05:36:01.926405 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.929800 kubelet[2999]: E1030 05:36:01.926511 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.929800 kubelet[2999]: W1030 05:36:01.926521 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.929800 kubelet[2999]: E1030 05:36:01.926526 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.929945 kubelet[2999]: E1030 05:36:01.926624 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.929945 kubelet[2999]: W1030 05:36:01.926630 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.929945 kubelet[2999]: E1030 05:36:01.926709 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.929945 kubelet[2999]: E1030 05:36:01.926964 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.929945 kubelet[2999]: W1030 05:36:01.926969 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.929945 kubelet[2999]: E1030 05:36:01.926975 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.929945 kubelet[2999]: E1030 05:36:01.927615 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.929945 kubelet[2999]: W1030 05:36:01.927620 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.929945 kubelet[2999]: E1030 05:36:01.927625 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.929945 kubelet[2999]: E1030 05:36:01.927733 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.930116 kubelet[2999]: W1030 05:36:01.927737 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.930116 kubelet[2999]: E1030 05:36:01.927742 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.930116 kubelet[2999]: E1030 05:36:01.927816 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.930116 kubelet[2999]: W1030 05:36:01.927821 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.930116 kubelet[2999]: E1030 05:36:01.927825 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.930116 kubelet[2999]: E1030 05:36:01.927899 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.930116 kubelet[2999]: W1030 05:36:01.927904 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.930116 kubelet[2999]: E1030 05:36:01.927908 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.930116 kubelet[2999]: E1030 05:36:01.927991 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.930116 kubelet[2999]: W1030 05:36:01.927995 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.930953 kubelet[2999]: E1030 05:36:01.928000 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.930953 kubelet[2999]: E1030 05:36:01.928081 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.930953 kubelet[2999]: W1030 05:36:01.928086 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.930953 kubelet[2999]: E1030 05:36:01.928091 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.930953 kubelet[2999]: E1030 05:36:01.928165 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.930953 kubelet[2999]: W1030 05:36:01.928168 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.930953 kubelet[2999]: E1030 05:36:01.928173 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.930953 kubelet[2999]: E1030 05:36:01.928237 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.930953 kubelet[2999]: W1030 05:36:01.928241 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.930953 kubelet[2999]: E1030 05:36:01.928249 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.931334 kubelet[2999]: E1030 05:36:01.928750 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.931334 kubelet[2999]: W1030 05:36:01.928755 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.931334 kubelet[2999]: E1030 05:36:01.928761 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.931334 kubelet[2999]: E1030 05:36:01.929010 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.931334 kubelet[2999]: W1030 05:36:01.929016 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.931334 kubelet[2999]: E1030 05:36:01.929022 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.931334 kubelet[2999]: E1030 05:36:01.929115 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.931334 kubelet[2999]: W1030 05:36:01.929121 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.931334 kubelet[2999]: E1030 05:36:01.929126 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.931334 kubelet[2999]: E1030 05:36:01.929227 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.936263 kubelet[2999]: W1030 05:36:01.929232 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.936263 kubelet[2999]: E1030 05:36:01.929237 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:01.943829 kubelet[2999]: E1030 05:36:01.943810 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:01.943829 kubelet[2999]: W1030 05:36:01.943822 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:01.943829 kubelet[2999]: E1030 05:36:01.943832 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:01.984036 containerd[1688]: time="2025-10-30T05:36:01.984014552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77789666b6-f7q5d,Uid:1ec6cb8d-2c41-403f-b09c-9c9d69038213,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd511a3b1c1ced468c8a2c194f49d309b2a4c4c2d6a76fd92eada4a93d4da6fa\"" Oct 30 05:36:01.985576 containerd[1688]: time="2025-10-30T05:36:01.985562334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 30 05:36:01.990215 containerd[1688]: time="2025-10-30T05:36:01.990173470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7mqxk,Uid:35308386-d9de-4441-9567-277e1d4ad6ec,Namespace:calico-system,Attempt:0,} returns sandbox id \"ebf396814b1d89f75dcaf1f06391964b114ddf5b206e6125c15b8748af8e7fa8\"" Oct 30 05:36:03.718672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1139452410.mount: Deactivated successfully. 
Oct 30 05:36:04.100856 kubelet[2999]: E1030 05:36:04.100770 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzqzl" podUID="9e840872-e6a6-422f-a0c7-b6b186a24394" Oct 30 05:36:05.182114 containerd[1688]: time="2025-10-30T05:36:05.181721539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 05:36:05.193043 containerd[1688]: time="2025-10-30T05:36:05.193028702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 30 05:36:05.202937 containerd[1688]: time="2025-10-30T05:36:05.202919695Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 05:36:05.212235 containerd[1688]: time="2025-10-30T05:36:05.209398003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 05:36:05.212235 containerd[1688]: time="2025-10-30T05:36:05.209777823Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.224133493s" Oct 30 05:36:05.212235 containerd[1688]: time="2025-10-30T05:36:05.209797936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 30 05:36:05.212235 containerd[1688]: time="2025-10-30T05:36:05.211508264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 30 05:36:05.244389 containerd[1688]: time="2025-10-30T05:36:05.244313333Z" level=info msg="CreateContainer within sandbox \"cd511a3b1c1ced468c8a2c194f49d309b2a4c4c2d6a76fd92eada4a93d4da6fa\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 30 05:36:05.378134 containerd[1688]: time="2025-10-30T05:36:05.377461127Z" level=info msg="Container b116e3cbcd28acdff68b24125452171bfa5f642cb15346e7aa24adc320bec7e7: CDI devices from CRI Config.CDIDevices: []" Oct 30 05:36:05.394842 containerd[1688]: time="2025-10-30T05:36:05.394802009Z" level=info msg="CreateContainer within sandbox \"cd511a3b1c1ced468c8a2c194f49d309b2a4c4c2d6a76fd92eada4a93d4da6fa\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b116e3cbcd28acdff68b24125452171bfa5f642cb15346e7aa24adc320bec7e7\"" Oct 30 05:36:05.395719 containerd[1688]: time="2025-10-30T05:36:05.395396597Z" level=info msg="StartContainer for \"b116e3cbcd28acdff68b24125452171bfa5f642cb15346e7aa24adc320bec7e7\"" Oct 30 05:36:05.396909 containerd[1688]: time="2025-10-30T05:36:05.396882847Z" level=info msg="connecting to shim b116e3cbcd28acdff68b24125452171bfa5f642cb15346e7aa24adc320bec7e7" address="unix:///run/containerd/s/51bf54fa0f91fb80f76ec41290e0b776aa2097ddfd377870b77767280312a309" protocol=ttrpc version=3 Oct 30 05:36:05.431500 systemd[1]: Started cri-containerd-b116e3cbcd28acdff68b24125452171bfa5f642cb15346e7aa24adc320bec7e7.scope - libcontainer container b116e3cbcd28acdff68b24125452171bfa5f642cb15346e7aa24adc320bec7e7. 
Oct 30 05:36:05.498162 containerd[1688]: time="2025-10-30T05:36:05.498090291Z" level=info msg="StartContainer for \"b116e3cbcd28acdff68b24125452171bfa5f642cb15346e7aa24adc320bec7e7\" returns successfully" Oct 30 05:36:06.100168 kubelet[2999]: E1030 05:36:06.100133 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzqzl" podUID="9e840872-e6a6-422f-a0c7-b6b186a24394" Oct 30 05:36:06.253480 kubelet[2999]: E1030 05:36:06.253451 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.253480 kubelet[2999]: W1030 05:36:06.253472 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.253480 kubelet[2999]: E1030 05:36:06.253485 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:06.253608 kubelet[2999]: E1030 05:36:06.253588 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.253608 kubelet[2999]: W1030 05:36:06.253592 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.253608 kubelet[2999]: E1030 05:36:06.253597 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:06.256123 kubelet[2999]: E1030 05:36:06.253671 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.256123 kubelet[2999]: W1030 05:36:06.253676 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.256123 kubelet[2999]: E1030 05:36:06.253681 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:06.262261 kubelet[2999]: E1030 05:36:06.262248 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.262261 kubelet[2999]: W1030 05:36:06.262258 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.262340 kubelet[2999]: E1030 05:36:06.262266 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:06.262409 kubelet[2999]: E1030 05:36:06.262397 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.262409 kubelet[2999]: W1030 05:36:06.262405 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.262471 kubelet[2999]: E1030 05:36:06.262411 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:06.262499 kubelet[2999]: E1030 05:36:06.262483 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.262499 kubelet[2999]: W1030 05:36:06.262488 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.262499 kubelet[2999]: E1030 05:36:06.262492 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:06.262565 kubelet[2999]: E1030 05:36:06.262561 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.262565 kubelet[2999]: W1030 05:36:06.262565 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.262617 kubelet[2999]: E1030 05:36:06.262570 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:06.262644 kubelet[2999]: E1030 05:36:06.262633 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.262644 kubelet[2999]: W1030 05:36:06.262637 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.262644 kubelet[2999]: E1030 05:36:06.262641 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:06.262741 kubelet[2999]: E1030 05:36:06.262730 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.262741 kubelet[2999]: W1030 05:36:06.262739 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.262783 kubelet[2999]: E1030 05:36:06.262744 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:06.262823 kubelet[2999]: E1030 05:36:06.262812 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.262823 kubelet[2999]: W1030 05:36:06.262820 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.262823 kubelet[2999]: E1030 05:36:06.262824 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:06.262902 kubelet[2999]: E1030 05:36:06.262892 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.262902 kubelet[2999]: W1030 05:36:06.262899 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.262938 kubelet[2999]: E1030 05:36:06.262904 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:06.262982 kubelet[2999]: E1030 05:36:06.262972 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.262982 kubelet[2999]: W1030 05:36:06.262979 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.263032 kubelet[2999]: E1030 05:36:06.262983 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:06.267704 kubelet[2999]: E1030 05:36:06.263058 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.267704 kubelet[2999]: W1030 05:36:06.263062 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.267704 kubelet[2999]: E1030 05:36:06.263066 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:06.267704 kubelet[2999]: E1030 05:36:06.263135 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.267704 kubelet[2999]: W1030 05:36:06.263139 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.267704 kubelet[2999]: E1030 05:36:06.263144 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:06.267704 kubelet[2999]: E1030 05:36:06.263212 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.267704 kubelet[2999]: W1030 05:36:06.263216 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.267704 kubelet[2999]: E1030 05:36:06.263220 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:06.267704 kubelet[2999]: E1030 05:36:06.263374 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.271802 kubelet[2999]: W1030 05:36:06.263378 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.271802 kubelet[2999]: E1030 05:36:06.263383 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:06.271802 kubelet[2999]: E1030 05:36:06.266884 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.271802 kubelet[2999]: W1030 05:36:06.266889 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.271802 kubelet[2999]: E1030 05:36:06.266895 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:06.271802 kubelet[2999]: E1030 05:36:06.267019 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.271802 kubelet[2999]: W1030 05:36:06.267028 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.271802 kubelet[2999]: E1030 05:36:06.267036 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:06.271802 kubelet[2999]: E1030 05:36:06.267128 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.271802 kubelet[2999]: W1030 05:36:06.267133 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.271960 kubelet[2999]: E1030 05:36:06.267137 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:06.271960 kubelet[2999]: E1030 05:36:06.267219 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.271960 kubelet[2999]: W1030 05:36:06.267223 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.271960 kubelet[2999]: E1030 05:36:06.267228 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:06.271960 kubelet[2999]: E1030 05:36:06.267333 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.271960 kubelet[2999]: W1030 05:36:06.267338 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.271960 kubelet[2999]: E1030 05:36:06.267343 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:06.271960 kubelet[2999]: E1030 05:36:06.267501 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.271960 kubelet[2999]: W1030 05:36:06.267506 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.271960 kubelet[2999]: E1030 05:36:06.267512 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:06.272115 kubelet[2999]: E1030 05:36:06.267628 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.272115 kubelet[2999]: W1030 05:36:06.267633 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.272115 kubelet[2999]: E1030 05:36:06.267638 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:06.272115 kubelet[2999]: E1030 05:36:06.267857 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.272115 kubelet[2999]: W1030 05:36:06.267862 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.272115 kubelet[2999]: E1030 05:36:06.267867 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:06.272115 kubelet[2999]: E1030 05:36:06.267951 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.272115 kubelet[2999]: W1030 05:36:06.267956 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.272115 kubelet[2999]: E1030 05:36:06.267960 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:06.272115 kubelet[2999]: E1030 05:36:06.268064 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.272267 kubelet[2999]: W1030 05:36:06.268069 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.272267 kubelet[2999]: E1030 05:36:06.268073 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:06.272267 kubelet[2999]: E1030 05:36:06.268213 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.272267 kubelet[2999]: W1030 05:36:06.268218 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.272267 kubelet[2999]: E1030 05:36:06.268228 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:06.272267 kubelet[2999]: E1030 05:36:06.268331 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.272267 kubelet[2999]: W1030 05:36:06.268336 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.272267 kubelet[2999]: E1030 05:36:06.268342 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:06.272267 kubelet[2999]: E1030 05:36:06.268431 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.272267 kubelet[2999]: W1030 05:36:06.268436 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.277450 kubelet[2999]: E1030 05:36:06.268442 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:06.277450 kubelet[2999]: E1030 05:36:06.268541 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.277450 kubelet[2999]: W1030 05:36:06.268547 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.277450 kubelet[2999]: E1030 05:36:06.268554 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:06.277450 kubelet[2999]: E1030 05:36:06.268635 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.277450 kubelet[2999]: W1030 05:36:06.268639 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.277450 kubelet[2999]: E1030 05:36:06.268643 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:06.277450 kubelet[2999]: E1030 05:36:06.268734 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.277450 kubelet[2999]: W1030 05:36:06.268738 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.277450 kubelet[2999]: E1030 05:36:06.268743 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 05:36:06.277610 kubelet[2999]: E1030 05:36:06.268902 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 05:36:06.277610 kubelet[2999]: W1030 05:36:06.268906 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 05:36:06.277610 kubelet[2999]: E1030 05:36:06.268911 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 05:36:06.787981 containerd[1688]: time="2025-10-30T05:36:06.787870535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 05:36:06.788928 containerd[1688]: time="2025-10-30T05:36:06.788903846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 30 05:36:06.789453 containerd[1688]: time="2025-10-30T05:36:06.789346789Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 05:36:06.791799 containerd[1688]: time="2025-10-30T05:36:06.791680897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 05:36:06.792629 containerd[1688]: time="2025-10-30T05:36:06.792614224Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.581088198s" Oct 30 05:36:06.792744 containerd[1688]: time="2025-10-30T05:36:06.792689205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 30 05:36:06.794832 containerd[1688]: time="2025-10-30T05:36:06.794787200Z" level=info msg="CreateContainer within sandbox \"ebf396814b1d89f75dcaf1f06391964b114ddf5b206e6125c15b8748af8e7fa8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 30 05:36:06.803245 containerd[1688]: time="2025-10-30T05:36:06.801473344Z" level=info msg="Container 04bf2a82b9863425afa1d46b2f06a171cbd803be0ca9dde5c63940c0c6980f2c: CDI devices from CRI Config.CDIDevices: []" Oct 30 05:36:06.806351 containerd[1688]: time="2025-10-30T05:36:06.806328885Z" level=info msg="CreateContainer within sandbox \"ebf396814b1d89f75dcaf1f06391964b114ddf5b206e6125c15b8748af8e7fa8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"04bf2a82b9863425afa1d46b2f06a171cbd803be0ca9dde5c63940c0c6980f2c\"" Oct 30 05:36:06.806836 containerd[1688]: time="2025-10-30T05:36:06.806819604Z" level=info msg="StartContainer for \"04bf2a82b9863425afa1d46b2f06a171cbd803be0ca9dde5c63940c0c6980f2c\"" Oct 30 05:36:06.807862 containerd[1688]: time="2025-10-30T05:36:06.807842655Z" level=info msg="connecting to shim 04bf2a82b9863425afa1d46b2f06a171cbd803be0ca9dde5c63940c0c6980f2c" address="unix:///run/containerd/s/1979801019f7245d0212605a95324da7937aa2c1e62a78da90ff121d25a9cf6b" protocol=ttrpc version=3 Oct 30 05:36:06.831466 systemd[1]: Started cri-containerd-04bf2a82b9863425afa1d46b2f06a171cbd803be0ca9dde5c63940c0c6980f2c.scope - libcontainer container 
04bf2a82b9863425afa1d46b2f06a171cbd803be0ca9dde5c63940c0c6980f2c. Oct 30 05:36:06.868404 containerd[1688]: time="2025-10-30T05:36:06.868323277Z" level=info msg="StartContainer for \"04bf2a82b9863425afa1d46b2f06a171cbd803be0ca9dde5c63940c0c6980f2c\" returns successfully" Oct 30 05:36:06.872220 systemd[1]: cri-containerd-04bf2a82b9863425afa1d46b2f06a171cbd803be0ca9dde5c63940c0c6980f2c.scope: Deactivated successfully. Oct 30 05:36:06.900410 containerd[1688]: time="2025-10-30T05:36:06.900375450Z" level=info msg="received exit event container_id:\"04bf2a82b9863425afa1d46b2f06a171cbd803be0ca9dde5c63940c0c6980f2c\" id:\"04bf2a82b9863425afa1d46b2f06a171cbd803be0ca9dde5c63940c0c6980f2c\" pid:3681 exited_at:{seconds:1761802566 nanos:875078398}" Oct 30 05:36:06.918339 containerd[1688]: time="2025-10-30T05:36:06.918313779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04bf2a82b9863425afa1d46b2f06a171cbd803be0ca9dde5c63940c0c6980f2c\" id:\"04bf2a82b9863425afa1d46b2f06a171cbd803be0ca9dde5c63940c0c6980f2c\" pid:3681 exited_at:{seconds:1761802566 nanos:875078398}" Oct 30 05:36:06.923219 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04bf2a82b9863425afa1d46b2f06a171cbd803be0ca9dde5c63940c0c6980f2c-rootfs.mount: Deactivated successfully. 
Oct 30 05:36:07.204678 kubelet[2999]: I1030 05:36:07.204654 2999 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 30 05:36:07.206289 kubelet[2999]: I1030 05:36:07.205773 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77789666b6-f7q5d" podStartSLOduration=2.980049579 podStartE2EDuration="6.205760649s" podCreationTimestamp="2025-10-30 05:36:01 +0000 UTC" firstStartedPulling="2025-10-30 05:36:01.984672894 +0000 UTC m=+19.056631214" lastFinishedPulling="2025-10-30 05:36:05.21038396 +0000 UTC m=+22.282342284" observedRunningTime="2025-10-30 05:36:06.171303955 +0000 UTC m=+23.243262286" watchObservedRunningTime="2025-10-30 05:36:07.205760649 +0000 UTC m=+24.277718979" Oct 30 05:36:08.100212 kubelet[2999]: E1030 05:36:08.100164 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzqzl" podUID="9e840872-e6a6-422f-a0c7-b6b186a24394" Oct 30 05:36:08.169221 containerd[1688]: time="2025-10-30T05:36:08.168728607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 30 05:36:10.100761 kubelet[2999]: E1030 05:36:10.100720 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzqzl" podUID="9e840872-e6a6-422f-a0c7-b6b186a24394" Oct 30 05:36:12.100520 kubelet[2999]: E1030 05:36:12.100486 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-wzqzl" podUID="9e840872-e6a6-422f-a0c7-b6b186a24394" Oct 30 05:36:13.396032 kubelet[2999]: I1030 05:36:13.396009 2999 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 30 05:36:14.100856 kubelet[2999]: E1030 05:36:14.100547 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzqzl" podUID="9e840872-e6a6-422f-a0c7-b6b186a24394" Oct 30 05:36:14.663300 containerd[1688]: time="2025-10-30T05:36:14.663153695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 05:36:14.663989 containerd[1688]: time="2025-10-30T05:36:14.663581269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 30 05:36:14.664794 containerd[1688]: time="2025-10-30T05:36:14.664057857Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 05:36:14.665139 containerd[1688]: time="2025-10-30T05:36:14.665123909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 05:36:14.665616 containerd[1688]: time="2025-10-30T05:36:14.665590927Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 6.496839176s" Oct 30 
05:36:14.665670 containerd[1688]: time="2025-10-30T05:36:14.665661679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 30 05:36:14.668532 containerd[1688]: time="2025-10-30T05:36:14.668506667Z" level=info msg="CreateContainer within sandbox \"ebf396814b1d89f75dcaf1f06391964b114ddf5b206e6125c15b8748af8e7fa8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 30 05:36:14.672102 containerd[1688]: time="2025-10-30T05:36:14.672083176Z" level=info msg="Container 0efe202e09dab07cbcb13f227f4cc4ea928d95a2c5c1a7ec927d7d098a9e0d6a: CDI devices from CRI Config.CDIDevices: []" Oct 30 05:36:14.710296 containerd[1688]: time="2025-10-30T05:36:14.710256102Z" level=info msg="CreateContainer within sandbox \"ebf396814b1d89f75dcaf1f06391964b114ddf5b206e6125c15b8748af8e7fa8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0efe202e09dab07cbcb13f227f4cc4ea928d95a2c5c1a7ec927d7d098a9e0d6a\"" Oct 30 05:36:14.713983 containerd[1688]: time="2025-10-30T05:36:14.711190380Z" level=info msg="StartContainer for \"0efe202e09dab07cbcb13f227f4cc4ea928d95a2c5c1a7ec927d7d098a9e0d6a\"" Oct 30 05:36:14.713983 containerd[1688]: time="2025-10-30T05:36:14.711998610Z" level=info msg="connecting to shim 0efe202e09dab07cbcb13f227f4cc4ea928d95a2c5c1a7ec927d7d098a9e0d6a" address="unix:///run/containerd/s/1979801019f7245d0212605a95324da7937aa2c1e62a78da90ff121d25a9cf6b" protocol=ttrpc version=3 Oct 30 05:36:14.730384 systemd[1]: Started cri-containerd-0efe202e09dab07cbcb13f227f4cc4ea928d95a2c5c1a7ec927d7d098a9e0d6a.scope - libcontainer container 0efe202e09dab07cbcb13f227f4cc4ea928d95a2c5c1a7ec927d7d098a9e0d6a. 
Oct 30 05:36:14.766944 containerd[1688]: time="2025-10-30T05:36:14.766916062Z" level=info msg="StartContainer for \"0efe202e09dab07cbcb13f227f4cc4ea928d95a2c5c1a7ec927d7d098a9e0d6a\" returns successfully" Oct 30 05:36:16.100147 kubelet[2999]: E1030 05:36:16.099929 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzqzl" podUID="9e840872-e6a6-422f-a0c7-b6b186a24394" Oct 30 05:36:16.404118 systemd[1]: cri-containerd-0efe202e09dab07cbcb13f227f4cc4ea928d95a2c5c1a7ec927d7d098a9e0d6a.scope: Deactivated successfully. Oct 30 05:36:16.404651 systemd[1]: cri-containerd-0efe202e09dab07cbcb13f227f4cc4ea928d95a2c5c1a7ec927d7d098a9e0d6a.scope: Consumed 308ms CPU time, 163.2M memory peak, 3.9M read from disk, 171.3M written to disk. Oct 30 05:36:16.430243 containerd[1688]: time="2025-10-30T05:36:16.406550758Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0efe202e09dab07cbcb13f227f4cc4ea928d95a2c5c1a7ec927d7d098a9e0d6a\" id:\"0efe202e09dab07cbcb13f227f4cc4ea928d95a2c5c1a7ec927d7d098a9e0d6a\" pid:3744 exited_at:{seconds:1761802576 nanos:405114640}" Oct 30 05:36:16.430243 containerd[1688]: time="2025-10-30T05:36:16.406611300Z" level=info msg="received exit event container_id:\"0efe202e09dab07cbcb13f227f4cc4ea928d95a2c5c1a7ec927d7d098a9e0d6a\" id:\"0efe202e09dab07cbcb13f227f4cc4ea928d95a2c5c1a7ec927d7d098a9e0d6a\" pid:3744 exited_at:{seconds:1761802576 nanos:405114640}" Oct 30 05:36:16.447067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0efe202e09dab07cbcb13f227f4cc4ea928d95a2c5c1a7ec927d7d098a9e0d6a-rootfs.mount: Deactivated successfully. 
Oct 30 05:36:16.557031 kubelet[2999]: I1030 05:36:16.557009 2999 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 30 05:36:16.712381 systemd[1]: Created slice kubepods-burstable-pod23a5b2c9_a75a_4c5c_9522_bc7e36043c20.slice - libcontainer container kubepods-burstable-pod23a5b2c9_a75a_4c5c_9522_bc7e36043c20.slice. Oct 30 05:36:16.717948 systemd[1]: Created slice kubepods-besteffort-pod99f1075f_ef64_457d_a338_2febaf8a005c.slice - libcontainer container kubepods-besteffort-pod99f1075f_ef64_457d_a338_2febaf8a005c.slice. Oct 30 05:36:16.747002 systemd[1]: Created slice kubepods-burstable-pod81efb543_1b70_4891_8544_9abff1da4cdb.slice - libcontainer container kubepods-burstable-pod81efb543_1b70_4891_8544_9abff1da4cdb.slice. Oct 30 05:36:16.759806 kubelet[2999]: I1030 05:36:16.759750 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwlk4\" (UniqueName: \"kubernetes.io/projected/99f1075f-ef64-457d-a338-2febaf8a005c-kube-api-access-bwlk4\") pod \"calico-apiserver-5c564d8bcd-xdxxp\" (UID: \"99f1075f-ef64-457d-a338-2febaf8a005c\") " pod="calico-apiserver/calico-apiserver-5c564d8bcd-xdxxp" Oct 30 05:36:16.760195 kubelet[2999]: I1030 05:36:16.760080 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlzn4\" (UniqueName: \"kubernetes.io/projected/81efb543-1b70-4891-8544-9abff1da4cdb-kube-api-access-qlzn4\") pod \"coredns-674b8bbfcf-qvzr4\" (UID: \"81efb543-1b70-4891-8544-9abff1da4cdb\") " pod="kube-system/coredns-674b8bbfcf-qvzr4" Oct 30 05:36:16.760704 kubelet[2999]: I1030 05:36:16.760100 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11de3de5-1c6c-4871-83d5-a6b9faf25770-config\") pod \"goldmane-666569f655-tznl5\" (UID: \"11de3de5-1c6c-4871-83d5-a6b9faf25770\") " pod="calico-system/goldmane-666569f655-tznl5" 
Oct 30 05:36:16.760704 kubelet[2999]: I1030 05:36:16.760570 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11de3de5-1c6c-4871-83d5-a6b9faf25770-goldmane-ca-bundle\") pod \"goldmane-666569f655-tznl5\" (UID: \"11de3de5-1c6c-4871-83d5-a6b9faf25770\") " pod="calico-system/goldmane-666569f655-tznl5" Oct 30 05:36:16.760704 kubelet[2999]: I1030 05:36:16.760581 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7b2k\" (UniqueName: \"kubernetes.io/projected/a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be-kube-api-access-r7b2k\") pod \"calico-apiserver-9d7959b66-vrsrq\" (UID: \"a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be\") " pod="calico-apiserver/calico-apiserver-9d7959b66-vrsrq" Oct 30 05:36:16.760704 kubelet[2999]: I1030 05:36:16.760591 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23a5b2c9-a75a-4c5c-9522-bc7e36043c20-config-volume\") pod \"coredns-674b8bbfcf-6df7j\" (UID: \"23a5b2c9-a75a-4c5c-9522-bc7e36043c20\") " pod="kube-system/coredns-674b8bbfcf-6df7j" Oct 30 05:36:16.760704 kubelet[2999]: I1030 05:36:16.760601 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1479dc23-7941-47ad-b834-96e9dba18694-whisker-backend-key-pair\") pod \"whisker-7fcb9688b6-vlnj9\" (UID: \"1479dc23-7941-47ad-b834-96e9dba18694\") " pod="calico-system/whisker-7fcb9688b6-vlnj9" Oct 30 05:36:16.761285 kubelet[2999]: I1030 05:36:16.760614 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9762f718-a4c7-49eb-975e-48fddc6a6070-calico-apiserver-certs\") pod \"calico-apiserver-5c564d8bcd-kspf5\" (UID: 
\"9762f718-a4c7-49eb-975e-48fddc6a6070\") " pod="calico-apiserver/calico-apiserver-5c564d8bcd-kspf5" Oct 30 05:36:16.761455 kubelet[2999]: I1030 05:36:16.761418 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/11de3de5-1c6c-4871-83d5-a6b9faf25770-goldmane-key-pair\") pod \"goldmane-666569f655-tznl5\" (UID: \"11de3de5-1c6c-4871-83d5-a6b9faf25770\") " pod="calico-system/goldmane-666569f655-tznl5" Oct 30 05:36:16.761529 kubelet[2999]: I1030 05:36:16.761519 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkwwr\" (UniqueName: \"kubernetes.io/projected/e883b021-a9bb-48ab-80cc-7b947568b059-kube-api-access-kkwwr\") pod \"calico-kube-controllers-7c65b456cf-hz876\" (UID: \"e883b021-a9bb-48ab-80cc-7b947568b059\") " pod="calico-system/calico-kube-controllers-7c65b456cf-hz876" Oct 30 05:36:16.763946 kubelet[2999]: I1030 05:36:16.762560 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nhtl\" (UniqueName: \"kubernetes.io/projected/1479dc23-7941-47ad-b834-96e9dba18694-kube-api-access-7nhtl\") pod \"whisker-7fcb9688b6-vlnj9\" (UID: \"1479dc23-7941-47ad-b834-96e9dba18694\") " pod="calico-system/whisker-7fcb9688b6-vlnj9" Oct 30 05:36:16.763946 kubelet[2999]: I1030 05:36:16.762919 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be-calico-apiserver-certs\") pod \"calico-apiserver-9d7959b66-vrsrq\" (UID: \"a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be\") " pod="calico-apiserver/calico-apiserver-9d7959b66-vrsrq" Oct 30 05:36:16.763946 kubelet[2999]: I1030 05:36:16.762936 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" 
(UniqueName: \"kubernetes.io/secret/99f1075f-ef64-457d-a338-2febaf8a005c-calico-apiserver-certs\") pod \"calico-apiserver-5c564d8bcd-xdxxp\" (UID: \"99f1075f-ef64-457d-a338-2febaf8a005c\") " pod="calico-apiserver/calico-apiserver-5c564d8bcd-xdxxp" Oct 30 05:36:16.763946 kubelet[2999]: I1030 05:36:16.762946 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1479dc23-7941-47ad-b834-96e9dba18694-whisker-ca-bundle\") pod \"whisker-7fcb9688b6-vlnj9\" (UID: \"1479dc23-7941-47ad-b834-96e9dba18694\") " pod="calico-system/whisker-7fcb9688b6-vlnj9" Oct 30 05:36:16.763946 kubelet[2999]: I1030 05:36:16.762957 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n22k8\" (UniqueName: \"kubernetes.io/projected/9762f718-a4c7-49eb-975e-48fddc6a6070-kube-api-access-n22k8\") pod \"calico-apiserver-5c564d8bcd-kspf5\" (UID: \"9762f718-a4c7-49eb-975e-48fddc6a6070\") " pod="calico-apiserver/calico-apiserver-5c564d8bcd-kspf5" Oct 30 05:36:16.764062 kubelet[2999]: I1030 05:36:16.762969 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkwth\" (UniqueName: \"kubernetes.io/projected/11de3de5-1c6c-4871-83d5-a6b9faf25770-kube-api-access-jkwth\") pod \"goldmane-666569f655-tznl5\" (UID: \"11de3de5-1c6c-4871-83d5-a6b9faf25770\") " pod="calico-system/goldmane-666569f655-tznl5" Oct 30 05:36:16.764062 kubelet[2999]: I1030 05:36:16.762978 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e883b021-a9bb-48ab-80cc-7b947568b059-tigera-ca-bundle\") pod \"calico-kube-controllers-7c65b456cf-hz876\" (UID: \"e883b021-a9bb-48ab-80cc-7b947568b059\") " pod="calico-system/calico-kube-controllers-7c65b456cf-hz876" Oct 30 05:36:16.764062 kubelet[2999]: I1030 
05:36:16.762992 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81efb543-1b70-4891-8544-9abff1da4cdb-config-volume\") pod \"coredns-674b8bbfcf-qvzr4\" (UID: \"81efb543-1b70-4891-8544-9abff1da4cdb\") " pod="kube-system/coredns-674b8bbfcf-qvzr4" Oct 30 05:36:16.764062 kubelet[2999]: I1030 05:36:16.763001 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwmtc\" (UniqueName: \"kubernetes.io/projected/23a5b2c9-a75a-4c5c-9522-bc7e36043c20-kube-api-access-lwmtc\") pod \"coredns-674b8bbfcf-6df7j\" (UID: \"23a5b2c9-a75a-4c5c-9522-bc7e36043c20\") " pod="kube-system/coredns-674b8bbfcf-6df7j" Oct 30 05:36:16.764250 systemd[1]: Created slice kubepods-besteffort-poda80719c9_5ba9_4e0b_97b9_ad9c69d9e2be.slice - libcontainer container kubepods-besteffort-poda80719c9_5ba9_4e0b_97b9_ad9c69d9e2be.slice. Oct 30 05:36:16.769050 systemd[1]: Created slice kubepods-besteffort-pode883b021_a9bb_48ab_80cc_7b947568b059.slice - libcontainer container kubepods-besteffort-pode883b021_a9bb_48ab_80cc_7b947568b059.slice. Oct 30 05:36:16.774878 systemd[1]: Created slice kubepods-besteffort-pod9762f718_a4c7_49eb_975e_48fddc6a6070.slice - libcontainer container kubepods-besteffort-pod9762f718_a4c7_49eb_975e_48fddc6a6070.slice. Oct 30 05:36:16.781996 systemd[1]: Created slice kubepods-besteffort-pod11de3de5_1c6c_4871_83d5_a6b9faf25770.slice - libcontainer container kubepods-besteffort-pod11de3de5_1c6c_4871_83d5_a6b9faf25770.slice. Oct 30 05:36:16.788402 systemd[1]: Created slice kubepods-besteffort-pod1479dc23_7941_47ad_b834_96e9dba18694.slice - libcontainer container kubepods-besteffort-pod1479dc23_7941_47ad_b834_96e9dba18694.slice. 
Oct 30 05:36:17.030737 containerd[1688]: time="2025-10-30T05:36:17.030125776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6df7j,Uid:23a5b2c9-a75a-4c5c-9522-bc7e36043c20,Namespace:kube-system,Attempt:0,}" Oct 30 05:36:17.045697 containerd[1688]: time="2025-10-30T05:36:17.045664017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c564d8bcd-xdxxp,Uid:99f1075f-ef64-457d-a338-2febaf8a005c,Namespace:calico-apiserver,Attempt:0,}" Oct 30 05:36:17.051380 containerd[1688]: time="2025-10-30T05:36:17.050935418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qvzr4,Uid:81efb543-1b70-4891-8544-9abff1da4cdb,Namespace:kube-system,Attempt:0,}" Oct 30 05:36:17.068366 containerd[1688]: time="2025-10-30T05:36:17.068344451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d7959b66-vrsrq,Uid:a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be,Namespace:calico-apiserver,Attempt:0,}" Oct 30 05:36:17.073265 containerd[1688]: time="2025-10-30T05:36:17.073242637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c65b456cf-hz876,Uid:e883b021-a9bb-48ab-80cc-7b947568b059,Namespace:calico-system,Attempt:0,}" Oct 30 05:36:17.080560 containerd[1688]: time="2025-10-30T05:36:17.080530861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c564d8bcd-kspf5,Uid:9762f718-a4c7-49eb-975e-48fddc6a6070,Namespace:calico-apiserver,Attempt:0,}" Oct 30 05:36:17.088210 containerd[1688]: time="2025-10-30T05:36:17.088181614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-tznl5,Uid:11de3de5-1c6c-4871-83d5-a6b9faf25770,Namespace:calico-system,Attempt:0,}" Oct 30 05:36:17.092870 containerd[1688]: time="2025-10-30T05:36:17.092839526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fcb9688b6-vlnj9,Uid:1479dc23-7941-47ad-b834-96e9dba18694,Namespace:calico-system,Attempt:0,}" Oct 30 05:36:17.263604 
containerd[1688]: time="2025-10-30T05:36:17.263428905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 30 05:36:17.451299 containerd[1688]: time="2025-10-30T05:36:17.450003683Z" level=error msg="Failed to destroy network for sandbox \"58b04580485d7e9978620e14f41126e4c1b1492ff8e69d6a95945b802617282e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.461074 systemd[1]: run-netns-cni\x2dd6f7e985\x2de9cd\x2d0759\x2d9700\x2d2c20238bb6c2.mount: Deactivated successfully. Oct 30 05:36:17.462771 containerd[1688]: time="2025-10-30T05:36:17.462716854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c564d8bcd-kspf5,Uid:9762f718-a4c7-49eb-975e-48fddc6a6070,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"58b04580485d7e9978620e14f41126e4c1b1492ff8e69d6a95945b802617282e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.469831 kubelet[2999]: E1030 05:36:17.469779 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58b04580485d7e9978620e14f41126e4c1b1492ff8e69d6a95945b802617282e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.470471 kubelet[2999]: E1030 05:36:17.469852 2999 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58b04580485d7e9978620e14f41126e4c1b1492ff8e69d6a95945b802617282e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c564d8bcd-kspf5" Oct 30 05:36:17.470471 kubelet[2999]: E1030 05:36:17.469869 2999 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58b04580485d7e9978620e14f41126e4c1b1492ff8e69d6a95945b802617282e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c564d8bcd-kspf5" Oct 30 05:36:17.474533 kubelet[2999]: E1030 05:36:17.473988 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c564d8bcd-kspf5_calico-apiserver(9762f718-a4c7-49eb-975e-48fddc6a6070)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c564d8bcd-kspf5_calico-apiserver(9762f718-a4c7-49eb-975e-48fddc6a6070)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58b04580485d7e9978620e14f41126e4c1b1492ff8e69d6a95945b802617282e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c564d8bcd-kspf5" podUID="9762f718-a4c7-49eb-975e-48fddc6a6070" Oct 30 05:36:17.501308 containerd[1688]: time="2025-10-30T05:36:17.501259169Z" level=error msg="Failed to destroy network for sandbox \"c942f9bb2ef7f8c0c137c2f6c20e98e15423d704e919f22fb1d240560e7d6bbc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.502993 containerd[1688]: 
time="2025-10-30T05:36:17.502975744Z" level=error msg="Failed to destroy network for sandbox \"47e194d1be46be2a557d2c565329514c916074779dfadace9bc358fd0a31b47e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.503223 systemd[1]: run-netns-cni\x2d39a7d008\x2dd06a\x2d0fe8\x2d237e\x2dd9d042d879e6.mount: Deactivated successfully. Oct 30 05:36:17.505202 systemd[1]: run-netns-cni\x2d1a6745aa\x2da375\x2d8308\x2dfa15\x2dda84b92e720d.mount: Deactivated successfully. Oct 30 05:36:17.507068 containerd[1688]: time="2025-10-30T05:36:17.506802832Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d7959b66-vrsrq,Uid:a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c942f9bb2ef7f8c0c137c2f6c20e98e15423d704e919f22fb1d240560e7d6bbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.508973 kubelet[2999]: E1030 05:36:17.506948 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c942f9bb2ef7f8c0c137c2f6c20e98e15423d704e919f22fb1d240560e7d6bbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.508973 kubelet[2999]: E1030 05:36:17.506985 2999 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c942f9bb2ef7f8c0c137c2f6c20e98e15423d704e919f22fb1d240560e7d6bbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9d7959b66-vrsrq" Oct 30 05:36:17.508973 kubelet[2999]: E1030 05:36:17.507002 2999 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c942f9bb2ef7f8c0c137c2f6c20e98e15423d704e919f22fb1d240560e7d6bbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9d7959b66-vrsrq" Oct 30 05:36:17.509070 kubelet[2999]: E1030 05:36:17.507037 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9d7959b66-vrsrq_calico-apiserver(a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9d7959b66-vrsrq_calico-apiserver(a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c942f9bb2ef7f8c0c137c2f6c20e98e15423d704e919f22fb1d240560e7d6bbc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9d7959b66-vrsrq" podUID="a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be" Oct 30 05:36:17.510963 containerd[1688]: time="2025-10-30T05:36:17.510929858Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fcb9688b6-vlnj9,Uid:1479dc23-7941-47ad-b834-96e9dba18694,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"47e194d1be46be2a557d2c565329514c916074779dfadace9bc358fd0a31b47e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.511764 kubelet[2999]: E1030 05:36:17.511737 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47e194d1be46be2a557d2c565329514c916074779dfadace9bc358fd0a31b47e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.515367 kubelet[2999]: E1030 05:36:17.511862 2999 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47e194d1be46be2a557d2c565329514c916074779dfadace9bc358fd0a31b47e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7fcb9688b6-vlnj9" Oct 30 05:36:17.515367 kubelet[2999]: E1030 05:36:17.511876 2999 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47e194d1be46be2a557d2c565329514c916074779dfadace9bc358fd0a31b47e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7fcb9688b6-vlnj9" Oct 30 05:36:17.515367 kubelet[2999]: E1030 05:36:17.511910 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7fcb9688b6-vlnj9_calico-system(1479dc23-7941-47ad-b834-96e9dba18694)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7fcb9688b6-vlnj9_calico-system(1479dc23-7941-47ad-b834-96e9dba18694)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"47e194d1be46be2a557d2c565329514c916074779dfadace9bc358fd0a31b47e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7fcb9688b6-vlnj9" podUID="1479dc23-7941-47ad-b834-96e9dba18694" Oct 30 05:36:17.515879 containerd[1688]: time="2025-10-30T05:36:17.515857119Z" level=error msg="Failed to destroy network for sandbox \"acbc27965e02003e2627bedb9c534daccf328acb2d22076274bfa4a0ece90eb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.517359 systemd[1]: run-netns-cni\x2d038c3eca\x2dcf97\x2d8343\x2d5dac\x2d73136932f764.mount: Deactivated successfully. Oct 30 05:36:17.520101 containerd[1688]: time="2025-10-30T05:36:17.520071006Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qvzr4,Uid:81efb543-1b70-4891-8544-9abff1da4cdb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"acbc27965e02003e2627bedb9c534daccf328acb2d22076274bfa4a0ece90eb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.520705 kubelet[2999]: E1030 05:36:17.520682 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acbc27965e02003e2627bedb9c534daccf328acb2d22076274bfa4a0ece90eb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.520832 kubelet[2999]: E1030 05:36:17.520815 2999 kuberuntime_sandbox.go:70] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acbc27965e02003e2627bedb9c534daccf328acb2d22076274bfa4a0ece90eb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qvzr4" Oct 30 05:36:17.520878 kubelet[2999]: E1030 05:36:17.520835 2999 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acbc27965e02003e2627bedb9c534daccf328acb2d22076274bfa4a0ece90eb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qvzr4" Oct 30 05:36:17.521301 kubelet[2999]: E1030 05:36:17.520931 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qvzr4_kube-system(81efb543-1b70-4891-8544-9abff1da4cdb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qvzr4_kube-system(81efb543-1b70-4891-8544-9abff1da4cdb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"acbc27965e02003e2627bedb9c534daccf328acb2d22076274bfa4a0ece90eb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qvzr4" podUID="81efb543-1b70-4891-8544-9abff1da4cdb" Oct 30 05:36:17.530399 containerd[1688]: time="2025-10-30T05:36:17.530266549Z" level=error msg="Failed to destroy network for sandbox \"41168f928d4386ba851abd0b9dc69ebd24b076969e520b70771f516fda1072fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.530772 containerd[1688]: time="2025-10-30T05:36:17.530748272Z" level=error msg="Failed to destroy network for sandbox \"fc45207f73b76a3738d7216133e4eeb799f87abc3ce3baf2abdc830117290ef4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.530979 containerd[1688]: time="2025-10-30T05:36:17.530960036Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6df7j,Uid:23a5b2c9-a75a-4c5c-9522-bc7e36043c20,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"41168f928d4386ba851abd0b9dc69ebd24b076969e520b70771f516fda1072fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.531171 kubelet[2999]: E1030 05:36:17.531137 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41168f928d4386ba851abd0b9dc69ebd24b076969e520b70771f516fda1072fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.531268 kubelet[2999]: E1030 05:36:17.531242 2999 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41168f928d4386ba851abd0b9dc69ebd24b076969e520b70771f516fda1072fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6df7j" Oct 30 05:36:17.531394 
kubelet[2999]: E1030 05:36:17.531378 2999 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41168f928d4386ba851abd0b9dc69ebd24b076969e520b70771f516fda1072fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6df7j" Oct 30 05:36:17.531503 kubelet[2999]: E1030 05:36:17.531487 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6df7j_kube-system(23a5b2c9-a75a-4c5c-9522-bc7e36043c20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6df7j_kube-system(23a5b2c9-a75a-4c5c-9522-bc7e36043c20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41168f928d4386ba851abd0b9dc69ebd24b076969e520b70771f516fda1072fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6df7j" podUID="23a5b2c9-a75a-4c5c-9522-bc7e36043c20" Oct 30 05:36:17.532884 containerd[1688]: time="2025-10-30T05:36:17.532838176Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c564d8bcd-xdxxp,Uid:99f1075f-ef64-457d-a338-2febaf8a005c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc45207f73b76a3738d7216133e4eeb799f87abc3ce3baf2abdc830117290ef4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.533424 kubelet[2999]: E1030 05:36:17.533396 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"fc45207f73b76a3738d7216133e4eeb799f87abc3ce3baf2abdc830117290ef4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.534812 kubelet[2999]: E1030 05:36:17.533439 2999 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc45207f73b76a3738d7216133e4eeb799f87abc3ce3baf2abdc830117290ef4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c564d8bcd-xdxxp" Oct 30 05:36:17.534812 kubelet[2999]: E1030 05:36:17.533456 2999 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc45207f73b76a3738d7216133e4eeb799f87abc3ce3baf2abdc830117290ef4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c564d8bcd-xdxxp" Oct 30 05:36:17.534812 kubelet[2999]: E1030 05:36:17.533489 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c564d8bcd-xdxxp_calico-apiserver(99f1075f-ef64-457d-a338-2febaf8a005c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c564d8bcd-xdxxp_calico-apiserver(99f1075f-ef64-457d-a338-2febaf8a005c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc45207f73b76a3738d7216133e4eeb799f87abc3ce3baf2abdc830117290ef4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c564d8bcd-xdxxp" podUID="99f1075f-ef64-457d-a338-2febaf8a005c" Oct 30 05:36:17.536148 containerd[1688]: time="2025-10-30T05:36:17.536022060Z" level=error msg="Failed to destroy network for sandbox \"06cd8787f0c7fa3c1b162411bbb498b0daa62b7e0b7b4f3927acdb6b5f0e97e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.536969 containerd[1688]: time="2025-10-30T05:36:17.536940720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-tznl5,Uid:11de3de5-1c6c-4871-83d5-a6b9faf25770,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"06cd8787f0c7fa3c1b162411bbb498b0daa62b7e0b7b4f3927acdb6b5f0e97e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.538333 kubelet[2999]: E1030 05:36:17.538305 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06cd8787f0c7fa3c1b162411bbb498b0daa62b7e0b7b4f3927acdb6b5f0e97e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.538431 kubelet[2999]: E1030 05:36:17.538341 2999 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06cd8787f0c7fa3c1b162411bbb498b0daa62b7e0b7b4f3927acdb6b5f0e97e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/goldmane-666569f655-tznl5" Oct 30 05:36:17.538431 kubelet[2999]: E1030 05:36:17.538357 2999 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06cd8787f0c7fa3c1b162411bbb498b0daa62b7e0b7b4f3927acdb6b5f0e97e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-tznl5" Oct 30 05:36:17.538431 kubelet[2999]: E1030 05:36:17.538390 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-tznl5_calico-system(11de3de5-1c6c-4871-83d5-a6b9faf25770)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-tznl5_calico-system(11de3de5-1c6c-4871-83d5-a6b9faf25770)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06cd8787f0c7fa3c1b162411bbb498b0daa62b7e0b7b4f3927acdb6b5f0e97e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-tznl5" podUID="11de3de5-1c6c-4871-83d5-a6b9faf25770" Oct 30 05:36:17.539178 containerd[1688]: time="2025-10-30T05:36:17.539092924Z" level=error msg="Failed to destroy network for sandbox \"060ad7584ebaca98087a4ee1b78bdc4543192e523e2184de35a6eb6f633d7aaf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.539551 containerd[1688]: time="2025-10-30T05:36:17.539517032Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7c65b456cf-hz876,Uid:e883b021-a9bb-48ab-80cc-7b947568b059,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"060ad7584ebaca98087a4ee1b78bdc4543192e523e2184de35a6eb6f633d7aaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.539713 kubelet[2999]: E1030 05:36:17.539677 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"060ad7584ebaca98087a4ee1b78bdc4543192e523e2184de35a6eb6f633d7aaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:17.539816 kubelet[2999]: E1030 05:36:17.539804 2999 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"060ad7584ebaca98087a4ee1b78bdc4543192e523e2184de35a6eb6f633d7aaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c65b456cf-hz876" Oct 30 05:36:17.539956 kubelet[2999]: E1030 05:36:17.539877 2999 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"060ad7584ebaca98087a4ee1b78bdc4543192e523e2184de35a6eb6f633d7aaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c65b456cf-hz876" Oct 30 05:36:17.539956 kubelet[2999]: E1030 
05:36:17.539922 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c65b456cf-hz876_calico-system(e883b021-a9bb-48ab-80cc-7b947568b059)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c65b456cf-hz876_calico-system(e883b021-a9bb-48ab-80cc-7b947568b059)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"060ad7584ebaca98087a4ee1b78bdc4543192e523e2184de35a6eb6f633d7aaf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c65b456cf-hz876" podUID="e883b021-a9bb-48ab-80cc-7b947568b059" Oct 30 05:36:18.110240 systemd[1]: Created slice kubepods-besteffort-pod9e840872_e6a6_422f_a0c7_b6b186a24394.slice - libcontainer container kubepods-besteffort-pod9e840872_e6a6_422f_a0c7_b6b186a24394.slice. 
Oct 30 05:36:18.111894 containerd[1688]: time="2025-10-30T05:36:18.111823650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzqzl,Uid:9e840872-e6a6-422f-a0c7-b6b186a24394,Namespace:calico-system,Attempt:0,}" Oct 30 05:36:18.153286 containerd[1688]: time="2025-10-30T05:36:18.153205497Z" level=error msg="Failed to destroy network for sandbox \"2d4462e5aff80d62a048c2d070090560c2ce37699839f0bd348335f36992263e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:18.153944 containerd[1688]: time="2025-10-30T05:36:18.153869820Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzqzl,Uid:9e840872-e6a6-422f-a0c7-b6b186a24394,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d4462e5aff80d62a048c2d070090560c2ce37699839f0bd348335f36992263e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:18.154052 kubelet[2999]: E1030 05:36:18.154027 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d4462e5aff80d62a048c2d070090560c2ce37699839f0bd348335f36992263e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 05:36:18.154100 kubelet[2999]: E1030 05:36:18.154068 2999 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d4462e5aff80d62a048c2d070090560c2ce37699839f0bd348335f36992263e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wzqzl" Oct 30 05:36:18.154100 kubelet[2999]: E1030 05:36:18.154084 2999 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d4462e5aff80d62a048c2d070090560c2ce37699839f0bd348335f36992263e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wzqzl" Oct 30 05:36:18.154169 kubelet[2999]: E1030 05:36:18.154118 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wzqzl_calico-system(9e840872-e6a6-422f-a0c7-b6b186a24394)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wzqzl_calico-system(9e840872-e6a6-422f-a0c7-b6b186a24394)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d4462e5aff80d62a048c2d070090560c2ce37699839f0bd348335f36992263e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wzqzl" podUID="9e840872-e6a6-422f-a0c7-b6b186a24394" Oct 30 05:36:18.446130 systemd[1]: run-netns-cni\x2debb41108\x2d0bd5\x2d1fe8\x2d8089\x2d30ec92acc17d.mount: Deactivated successfully. Oct 30 05:36:18.446195 systemd[1]: run-netns-cni\x2d0d46fd99\x2d883d\x2d9800\x2df857\x2db90287b365cb.mount: Deactivated successfully. Oct 30 05:36:18.446229 systemd[1]: run-netns-cni\x2d7e786ffb\x2d6c03\x2d739c\x2d342e\x2df4f5d4e6d65b.mount: Deactivated successfully. Oct 30 05:36:18.446262 systemd[1]: run-netns-cni\x2de44d8e41\x2dda04\x2d8774\x2d06d3\x2d3077dffd8dba.mount: Deactivated successfully. 
Oct 30 05:36:22.458908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1449420283.mount: Deactivated successfully. Oct 30 05:36:22.541299 containerd[1688]: time="2025-10-30T05:36:22.540894989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 30 05:36:22.547868 containerd[1688]: time="2025-10-30T05:36:22.547841444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 05:36:22.557331 containerd[1688]: time="2025-10-30T05:36:22.557290824Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 05:36:22.567317 containerd[1688]: time="2025-10-30T05:36:22.567291054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 05:36:22.568267 containerd[1688]: time="2025-10-30T05:36:22.568217656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.303741214s" Oct 30 05:36:22.568267 containerd[1688]: time="2025-10-30T05:36:22.568239214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 30 05:36:22.705794 containerd[1688]: time="2025-10-30T05:36:22.705761110Z" level=info msg="CreateContainer within sandbox \"ebf396814b1d89f75dcaf1f06391964b114ddf5b206e6125c15b8748af8e7fa8\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 30 05:36:22.808946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount264383995.mount: Deactivated successfully. Oct 30 05:36:22.809256 containerd[1688]: time="2025-10-30T05:36:22.809235650Z" level=info msg="Container 6800f4d88e80d6366f007a8fb3b5a32ce863fca4f3eba69b2742da5146841bad: CDI devices from CRI Config.CDIDevices: []" Oct 30 05:36:22.890575 containerd[1688]: time="2025-10-30T05:36:22.890506093Z" level=info msg="CreateContainer within sandbox \"ebf396814b1d89f75dcaf1f06391964b114ddf5b206e6125c15b8748af8e7fa8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6800f4d88e80d6366f007a8fb3b5a32ce863fca4f3eba69b2742da5146841bad\"" Oct 30 05:36:22.890857 containerd[1688]: time="2025-10-30T05:36:22.890828148Z" level=info msg="StartContainer for \"6800f4d88e80d6366f007a8fb3b5a32ce863fca4f3eba69b2742da5146841bad\"" Oct 30 05:36:22.907842 containerd[1688]: time="2025-10-30T05:36:22.907761226Z" level=info msg="connecting to shim 6800f4d88e80d6366f007a8fb3b5a32ce863fca4f3eba69b2742da5146841bad" address="unix:///run/containerd/s/1979801019f7245d0212605a95324da7937aa2c1e62a78da90ff121d25a9cf6b" protocol=ttrpc version=3 Oct 30 05:36:23.118392 systemd[1]: Started cri-containerd-6800f4d88e80d6366f007a8fb3b5a32ce863fca4f3eba69b2742da5146841bad.scope - libcontainer container 6800f4d88e80d6366f007a8fb3b5a32ce863fca4f3eba69b2742da5146841bad. Oct 30 05:36:23.165939 containerd[1688]: time="2025-10-30T05:36:23.164418830Z" level=info msg="StartContainer for \"6800f4d88e80d6366f007a8fb3b5a32ce863fca4f3eba69b2742da5146841bad\" returns successfully" Oct 30 05:36:24.194943 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 30 05:36:24.221826 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 30 05:36:24.577588 kubelet[2999]: I1030 05:36:24.571899 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7mqxk" podStartSLOduration=2.992852132 podStartE2EDuration="23.571885528s" podCreationTimestamp="2025-10-30 05:36:01 +0000 UTC" firstStartedPulling="2025-10-30 05:36:01.990652171 +0000 UTC m=+19.062610488" lastFinishedPulling="2025-10-30 05:36:22.569685566 +0000 UTC m=+39.641643884" observedRunningTime="2025-10-30 05:36:23.383143089 +0000 UTC m=+40.455101419" watchObservedRunningTime="2025-10-30 05:36:24.571885528 +0000 UTC m=+41.643843857" Oct 30 05:36:24.652298 kubelet[2999]: I1030 05:36:24.651192 2999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nhtl\" (UniqueName: \"kubernetes.io/projected/1479dc23-7941-47ad-b834-96e9dba18694-kube-api-access-7nhtl\") pod \"1479dc23-7941-47ad-b834-96e9dba18694\" (UID: \"1479dc23-7941-47ad-b834-96e9dba18694\") " Oct 30 05:36:24.652741 kubelet[2999]: I1030 05:36:24.652478 2999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1479dc23-7941-47ad-b834-96e9dba18694-whisker-ca-bundle\") pod \"1479dc23-7941-47ad-b834-96e9dba18694\" (UID: \"1479dc23-7941-47ad-b834-96e9dba18694\") " Oct 30 05:36:24.652741 kubelet[2999]: I1030 05:36:24.652513 2999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1479dc23-7941-47ad-b834-96e9dba18694-whisker-backend-key-pair\") pod \"1479dc23-7941-47ad-b834-96e9dba18694\" (UID: \"1479dc23-7941-47ad-b834-96e9dba18694\") " Oct 30 05:36:24.670749 kubelet[2999]: I1030 05:36:24.670216 2999 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1479dc23-7941-47ad-b834-96e9dba18694-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"1479dc23-7941-47ad-b834-96e9dba18694" (UID: "1479dc23-7941-47ad-b834-96e9dba18694"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 30 05:36:24.679748 systemd[1]: var-lib-kubelet-pods-1479dc23\x2d7941\x2d47ad\x2db834\x2d96e9dba18694-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7nhtl.mount: Deactivated successfully. Oct 30 05:36:24.679843 systemd[1]: var-lib-kubelet-pods-1479dc23\x2d7941\x2d47ad\x2db834\x2d96e9dba18694-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 30 05:36:24.681953 kubelet[2999]: I1030 05:36:24.681594 2999 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1479dc23-7941-47ad-b834-96e9dba18694-kube-api-access-7nhtl" (OuterVolumeSpecName: "kube-api-access-7nhtl") pod "1479dc23-7941-47ad-b834-96e9dba18694" (UID: "1479dc23-7941-47ad-b834-96e9dba18694"). InnerVolumeSpecName "kube-api-access-7nhtl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 30 05:36:24.681953 kubelet[2999]: I1030 05:36:24.681717 2999 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1479dc23-7941-47ad-b834-96e9dba18694-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1479dc23-7941-47ad-b834-96e9dba18694" (UID: "1479dc23-7941-47ad-b834-96e9dba18694"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 30 05:36:24.753065 kubelet[2999]: I1030 05:36:24.753014 2999 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7nhtl\" (UniqueName: \"kubernetes.io/projected/1479dc23-7941-47ad-b834-96e9dba18694-kube-api-access-7nhtl\") on node \"localhost\" DevicePath \"\"" Oct 30 05:36:24.753065 kubelet[2999]: I1030 05:36:24.753037 2999 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1479dc23-7941-47ad-b834-96e9dba18694-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 30 05:36:24.753065 kubelet[2999]: I1030 05:36:24.753043 2999 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1479dc23-7941-47ad-b834-96e9dba18694-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 30 05:36:25.107786 systemd[1]: Removed slice kubepods-besteffort-pod1479dc23_7941_47ad_b834_96e9dba18694.slice - libcontainer container kubepods-besteffort-pod1479dc23_7941_47ad_b834_96e9dba18694.slice. 
Oct 30 05:36:25.356495 kubelet[2999]: I1030 05:36:25.356465 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a2f201cf-9e70-4227-a745-f73352a6fa53-whisker-backend-key-pair\") pod \"whisker-5fc4bbb676-9vsss\" (UID: \"a2f201cf-9e70-4227-a745-f73352a6fa53\") " pod="calico-system/whisker-5fc4bbb676-9vsss" Oct 30 05:36:25.356495 kubelet[2999]: I1030 05:36:25.356493 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2f201cf-9e70-4227-a745-f73352a6fa53-whisker-ca-bundle\") pod \"whisker-5fc4bbb676-9vsss\" (UID: \"a2f201cf-9e70-4227-a745-f73352a6fa53\") " pod="calico-system/whisker-5fc4bbb676-9vsss" Oct 30 05:36:25.356613 kubelet[2999]: I1030 05:36:25.356506 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbs2j\" (UniqueName: \"kubernetes.io/projected/a2f201cf-9e70-4227-a745-f73352a6fa53-kube-api-access-jbs2j\") pod \"whisker-5fc4bbb676-9vsss\" (UID: \"a2f201cf-9e70-4227-a745-f73352a6fa53\") " pod="calico-system/whisker-5fc4bbb676-9vsss" Oct 30 05:36:25.371389 systemd[1]: Created slice kubepods-besteffort-poda2f201cf_9e70_4227_a745_f73352a6fa53.slice - libcontainer container kubepods-besteffort-poda2f201cf_9e70_4227_a745_f73352a6fa53.slice. 
Oct 30 05:36:25.678168 containerd[1688]: time="2025-10-30T05:36:25.678119312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fc4bbb676-9vsss,Uid:a2f201cf-9e70-4227-a745-f73352a6fa53,Namespace:calico-system,Attempt:0,}" Oct 30 05:36:26.255485 systemd-networkd[1568]: vxlan.calico: Link UP Oct 30 05:36:26.255491 systemd-networkd[1568]: vxlan.calico: Gained carrier Oct 30 05:36:26.479051 systemd-networkd[1568]: cali8d798c982be: Link UP Oct 30 05:36:26.480414 systemd-networkd[1568]: cali8d798c982be: Gained carrier Oct 30 05:36:26.495004 containerd[1688]: 2025-10-30 05:36:25.745 [INFO][4196] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 05:36:26.495004 containerd[1688]: 2025-10-30 05:36:25.947 [INFO][4196] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5fc4bbb676--9vsss-eth0 whisker-5fc4bbb676- calico-system a2f201cf-9e70-4227-a745-f73352a6fa53 916 0 2025-10-30 05:36:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5fc4bbb676 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5fc4bbb676-9vsss eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8d798c982be [] [] }} ContainerID="1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec" Namespace="calico-system" Pod="whisker-5fc4bbb676-9vsss" WorkloadEndpoint="localhost-k8s-whisker--5fc4bbb676--9vsss-" Oct 30 05:36:26.495004 containerd[1688]: 2025-10-30 05:36:25.947 [INFO][4196] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec" Namespace="calico-system" Pod="whisker-5fc4bbb676-9vsss" WorkloadEndpoint="localhost-k8s-whisker--5fc4bbb676--9vsss-eth0" Oct 30 05:36:26.495004 containerd[1688]: 2025-10-30 05:36:26.409 [INFO][4211] ipam/ipam_plugin.go 227: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec" HandleID="k8s-pod-network.1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec" Workload="localhost-k8s-whisker--5fc4bbb676--9vsss-eth0" Oct 30 05:36:26.495186 containerd[1688]: 2025-10-30 05:36:26.411 [INFO][4211] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec" HandleID="k8s-pod-network.1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec" Workload="localhost-k8s-whisker--5fc4bbb676--9vsss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103df0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5fc4bbb676-9vsss", "timestamp":"2025-10-30 05:36:26.409671882 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 05:36:26.495186 containerd[1688]: 2025-10-30 05:36:26.411 [INFO][4211] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 05:36:26.495186 containerd[1688]: 2025-10-30 05:36:26.411 [INFO][4211] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 05:36:26.495186 containerd[1688]: 2025-10-30 05:36:26.413 [INFO][4211] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 05:36:26.495186 containerd[1688]: 2025-10-30 05:36:26.440 [INFO][4211] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec" host="localhost" Oct 30 05:36:26.495186 containerd[1688]: 2025-10-30 05:36:26.455 [INFO][4211] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 05:36:26.495186 containerd[1688]: 2025-10-30 05:36:26.459 [INFO][4211] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 05:36:26.495186 containerd[1688]: 2025-10-30 05:36:26.461 [INFO][4211] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 05:36:26.495186 containerd[1688]: 2025-10-30 05:36:26.462 [INFO][4211] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 05:36:26.495186 containerd[1688]: 2025-10-30 05:36:26.462 [INFO][4211] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec" host="localhost" Oct 30 05:36:26.495716 containerd[1688]: 2025-10-30 05:36:26.463 [INFO][4211] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec Oct 30 05:36:26.495716 containerd[1688]: 2025-10-30 05:36:26.466 [INFO][4211] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec" host="localhost" Oct 30 05:36:26.495716 containerd[1688]: 2025-10-30 05:36:26.471 [INFO][4211] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec" host="localhost" Oct 30 05:36:26.495716 containerd[1688]: 2025-10-30 05:36:26.471 [INFO][4211] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec" host="localhost" Oct 30 05:36:26.495716 containerd[1688]: 2025-10-30 05:36:26.471 [INFO][4211] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 05:36:26.495716 containerd[1688]: 2025-10-30 05:36:26.471 [INFO][4211] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec" HandleID="k8s-pod-network.1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec" Workload="localhost-k8s-whisker--5fc4bbb676--9vsss-eth0" Oct 30 05:36:26.499049 containerd[1688]: 2025-10-30 05:36:26.473 [INFO][4196] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec" Namespace="calico-system" Pod="whisker-5fc4bbb676-9vsss" WorkloadEndpoint="localhost-k8s-whisker--5fc4bbb676--9vsss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5fc4bbb676--9vsss-eth0", GenerateName:"whisker-5fc4bbb676-", Namespace:"calico-system", SelfLink:"", UID:"a2f201cf-9e70-4227-a745-f73352a6fa53", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 5, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5fc4bbb676", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5fc4bbb676-9vsss", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8d798c982be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 05:36:26.499049 containerd[1688]: 2025-10-30 05:36:26.473 [INFO][4196] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec" Namespace="calico-system" Pod="whisker-5fc4bbb676-9vsss" WorkloadEndpoint="localhost-k8s-whisker--5fc4bbb676--9vsss-eth0" Oct 30 05:36:26.500867 containerd[1688]: 2025-10-30 05:36:26.473 [INFO][4196] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d798c982be ContainerID="1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec" Namespace="calico-system" Pod="whisker-5fc4bbb676-9vsss" WorkloadEndpoint="localhost-k8s-whisker--5fc4bbb676--9vsss-eth0" Oct 30 05:36:26.500867 containerd[1688]: 2025-10-30 05:36:26.479 [INFO][4196] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec" Namespace="calico-system" Pod="whisker-5fc4bbb676-9vsss" WorkloadEndpoint="localhost-k8s-whisker--5fc4bbb676--9vsss-eth0" Oct 30 05:36:26.500906 containerd[1688]: 2025-10-30 05:36:26.481 [INFO][4196] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec" Namespace="calico-system" Pod="whisker-5fc4bbb676-9vsss" 
WorkloadEndpoint="localhost-k8s-whisker--5fc4bbb676--9vsss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5fc4bbb676--9vsss-eth0", GenerateName:"whisker-5fc4bbb676-", Namespace:"calico-system", SelfLink:"", UID:"a2f201cf-9e70-4227-a745-f73352a6fa53", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 5, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5fc4bbb676", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec", Pod:"whisker-5fc4bbb676-9vsss", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8d798c982be", MAC:"1e:b3:1b:b0:69:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 05:36:26.500948 containerd[1688]: 2025-10-30 05:36:26.490 [INFO][4196] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec" Namespace="calico-system" Pod="whisker-5fc4bbb676-9vsss" WorkloadEndpoint="localhost-k8s-whisker--5fc4bbb676--9vsss-eth0" Oct 30 05:36:26.735560 containerd[1688]: time="2025-10-30T05:36:26.735516577Z" level=info msg="connecting to shim 
1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec" address="unix:///run/containerd/s/f122f84b0575c080e3249866b827f1c228c0469e41a74c39cba20ac4c5592cbe" namespace=k8s.io protocol=ttrpc version=3 Oct 30 05:36:26.786618 systemd[1]: Started cri-containerd-1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec.scope - libcontainer container 1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec. Oct 30 05:36:26.798814 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 05:36:26.838774 containerd[1688]: time="2025-10-30T05:36:26.838675800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fc4bbb676-9vsss,Uid:a2f201cf-9e70-4227-a745-f73352a6fa53,Namespace:calico-system,Attempt:0,} returns sandbox id \"1bd73b2f6427d300a0ee494f30e32dbc4e8a3936f6b4805f5681f7f99877feec\"" Oct 30 05:36:26.867732 containerd[1688]: time="2025-10-30T05:36:26.867673863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 05:36:27.102306 kubelet[2999]: I1030 05:36:27.102213 2999 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1479dc23-7941-47ad-b834-96e9dba18694" path="/var/lib/kubelet/pods/1479dc23-7941-47ad-b834-96e9dba18694/volumes" Oct 30 05:36:27.237691 containerd[1688]: time="2025-10-30T05:36:27.237666148Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:36:27.242338 containerd[1688]: time="2025-10-30T05:36:27.242310767Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 05:36:27.242431 containerd[1688]: time="2025-10-30T05:36:27.242374940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: 
active requests=0, bytes read=73" Oct 30 05:36:27.244904 kubelet[2999]: E1030 05:36:27.244879 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 05:36:27.244974 kubelet[2999]: E1030 05:36:27.244917 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 05:36:27.248250 kubelet[2999]: E1030 05:36:27.248202 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3e229c8b9d734849864085095eacfb5b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jbs2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*
true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fc4bbb676-9vsss_calico-system(a2f201cf-9e70-4227-a745-f73352a6fa53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 05:36:27.250300 containerd[1688]: time="2025-10-30T05:36:27.250136856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 05:36:27.596931 containerd[1688]: time="2025-10-30T05:36:27.596901547Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:36:27.599405 containerd[1688]: time="2025-10-30T05:36:27.599383355Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 05:36:27.599460 containerd[1688]: time="2025-10-30T05:36:27.599437706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 05:36:27.599532 kubelet[2999]: E1030 05:36:27.599507 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 05:36:27.599592 kubelet[2999]: E1030 05:36:27.599539 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 05:36:27.599710 kubelet[2999]: E1030 05:36:27.599611 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbs2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Ca
pabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fc4bbb676-9vsss_calico-system(a2f201cf-9e70-4227-a745-f73352a6fa53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 05:36:27.601156 kubelet[2999]: E1030 05:36:27.601125 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc4bbb676-9vsss" podUID="a2f201cf-9e70-4227-a745-f73352a6fa53" Oct 30 05:36:27.749373 systemd-networkd[1568]: vxlan.calico: Gained IPv6LL Oct 30 05:36:28.231099 kubelet[2999]: I1030 05:36:28.230565 2999 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 
30 05:36:28.362302 containerd[1688]: time="2025-10-30T05:36:28.362245986Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6800f4d88e80d6366f007a8fb3b5a32ce863fca4f3eba69b2742da5146841bad\" id:\"4a1bd5804f464e20f30833649a2fb8a441c770ec82ba687a93f6187632ab194d\" pid:4388 exited_at:{seconds:1761802588 nanos:362032865}" Oct 30 05:36:28.379099 kubelet[2999]: E1030 05:36:28.378947 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc4bbb676-9vsss" podUID="a2f201cf-9e70-4227-a745-f73352a6fa53" Oct 30 05:36:28.448674 containerd[1688]: time="2025-10-30T05:36:28.448645825Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6800f4d88e80d6366f007a8fb3b5a32ce863fca4f3eba69b2742da5146841bad\" id:\"fa3df3852e1457820d1b08d1da111d3099742a7de9d5c2444efc0c486542030c\" pid:4411 exited_at:{seconds:1761802588 nanos:448457848}" Oct 30 05:36:28.517525 systemd-networkd[1568]: cali8d798c982be: Gained IPv6LL Oct 30 05:36:30.100988 containerd[1688]: time="2025-10-30T05:36:30.100693424Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-qvzr4,Uid:81efb543-1b70-4891-8544-9abff1da4cdb,Namespace:kube-system,Attempt:0,}" Oct 30 05:36:30.100988 containerd[1688]: time="2025-10-30T05:36:30.100693505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d7959b66-vrsrq,Uid:a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be,Namespace:calico-apiserver,Attempt:0,}" Oct 30 05:36:30.101248 containerd[1688]: time="2025-10-30T05:36:30.101085628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c65b456cf-hz876,Uid:e883b021-a9bb-48ab-80cc-7b947568b059,Namespace:calico-system,Attempt:0,}" Oct 30 05:36:30.250497 systemd-networkd[1568]: calib16a9e4d07f: Link UP Oct 30 05:36:30.250901 systemd-networkd[1568]: calib16a9e4d07f: Gained carrier Oct 30 05:36:30.263453 containerd[1688]: 2025-10-30 05:36:30.174 [INFO][4427] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9d7959b66--vrsrq-eth0 calico-apiserver-9d7959b66- calico-apiserver a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be 846 0 2025-10-30 05:35:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9d7959b66 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9d7959b66-vrsrq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib16a9e4d07f [] [] }} ContainerID="a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d" Namespace="calico-apiserver" Pod="calico-apiserver-9d7959b66-vrsrq" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d7959b66--vrsrq-" Oct 30 05:36:30.263453 containerd[1688]: 2025-10-30 05:36:30.174 [INFO][4427] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d" 
Namespace="calico-apiserver" Pod="calico-apiserver-9d7959b66-vrsrq" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d7959b66--vrsrq-eth0" Oct 30 05:36:30.263453 containerd[1688]: 2025-10-30 05:36:30.209 [INFO][4464] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d" HandleID="k8s-pod-network.a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d" Workload="localhost-k8s-calico--apiserver--9d7959b66--vrsrq-eth0" Oct 30 05:36:30.263594 containerd[1688]: 2025-10-30 05:36:30.209 [INFO][4464] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d" HandleID="k8s-pod-network.a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d" Workload="localhost-k8s-calico--apiserver--9d7959b66--vrsrq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032a470), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9d7959b66-vrsrq", "timestamp":"2025-10-30 05:36:30.209808877 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 05:36:30.263594 containerd[1688]: 2025-10-30 05:36:30.209 [INFO][4464] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 05:36:30.263594 containerd[1688]: 2025-10-30 05:36:30.209 [INFO][4464] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 05:36:30.263594 containerd[1688]: 2025-10-30 05:36:30.209 [INFO][4464] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 05:36:30.263594 containerd[1688]: 2025-10-30 05:36:30.216 [INFO][4464] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d" host="localhost" Oct 30 05:36:30.263594 containerd[1688]: 2025-10-30 05:36:30.219 [INFO][4464] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 05:36:30.263594 containerd[1688]: 2025-10-30 05:36:30.221 [INFO][4464] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 05:36:30.263594 containerd[1688]: 2025-10-30 05:36:30.223 [INFO][4464] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 05:36:30.263594 containerd[1688]: 2025-10-30 05:36:30.224 [INFO][4464] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 05:36:30.263594 containerd[1688]: 2025-10-30 05:36:30.224 [INFO][4464] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d" host="localhost" Oct 30 05:36:30.264973 containerd[1688]: 2025-10-30 05:36:30.225 [INFO][4464] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d Oct 30 05:36:30.264973 containerd[1688]: 2025-10-30 05:36:30.229 [INFO][4464] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d" host="localhost" Oct 30 05:36:30.264973 containerd[1688]: 2025-10-30 05:36:30.241 [INFO][4464] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d" host="localhost" Oct 30 05:36:30.264973 containerd[1688]: 2025-10-30 05:36:30.241 [INFO][4464] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d" host="localhost" Oct 30 05:36:30.264973 containerd[1688]: 2025-10-30 05:36:30.241 [INFO][4464] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 05:36:30.264973 containerd[1688]: 2025-10-30 05:36:30.241 [INFO][4464] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d" HandleID="k8s-pod-network.a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d" Workload="localhost-k8s-calico--apiserver--9d7959b66--vrsrq-eth0" Oct 30 05:36:30.265075 containerd[1688]: 2025-10-30 05:36:30.243 [INFO][4427] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d" Namespace="calico-apiserver" Pod="calico-apiserver-9d7959b66-vrsrq" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d7959b66--vrsrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9d7959b66--vrsrq-eth0", GenerateName:"calico-apiserver-9d7959b66-", Namespace:"calico-apiserver", SelfLink:"", UID:"a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 5, 35, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9d7959b66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9d7959b66-vrsrq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib16a9e4d07f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 05:36:30.265120 containerd[1688]: 2025-10-30 05:36:30.243 [INFO][4427] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d" Namespace="calico-apiserver" Pod="calico-apiserver-9d7959b66-vrsrq" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d7959b66--vrsrq-eth0" Oct 30 05:36:30.265120 containerd[1688]: 2025-10-30 05:36:30.243 [INFO][4427] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib16a9e4d07f ContainerID="a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d" Namespace="calico-apiserver" Pod="calico-apiserver-9d7959b66-vrsrq" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d7959b66--vrsrq-eth0" Oct 30 05:36:30.265120 containerd[1688]: 2025-10-30 05:36:30.251 [INFO][4427] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d" Namespace="calico-apiserver" Pod="calico-apiserver-9d7959b66-vrsrq" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d7959b66--vrsrq-eth0" Oct 30 05:36:30.265661 containerd[1688]: 2025-10-30 05:36:30.251 [INFO][4427] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d" Namespace="calico-apiserver" Pod="calico-apiserver-9d7959b66-vrsrq" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d7959b66--vrsrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9d7959b66--vrsrq-eth0", GenerateName:"calico-apiserver-9d7959b66-", Namespace:"calico-apiserver", SelfLink:"", UID:"a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 5, 35, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9d7959b66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d", Pod:"calico-apiserver-9d7959b66-vrsrq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib16a9e4d07f", MAC:"ba:cc:eb:41:28:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 05:36:30.265729 containerd[1688]: 2025-10-30 05:36:30.259 [INFO][4427] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d" Namespace="calico-apiserver" Pod="calico-apiserver-9d7959b66-vrsrq" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d7959b66--vrsrq-eth0" Oct 30 05:36:30.292791 containerd[1688]: time="2025-10-30T05:36:30.292753788Z" level=info msg="connecting to shim a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d" address="unix:///run/containerd/s/1a7430c72a8a014c5ea1742ed6e2b02de8cd71f6e2744a821b47c49b7a661f9f" namespace=k8s.io protocol=ttrpc version=3 Oct 30 05:36:30.316173 systemd[1]: Started cri-containerd-a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d.scope - libcontainer container a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d. Oct 30 05:36:30.335001 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 05:36:30.351655 systemd-networkd[1568]: calife2f4d2e6b5: Link UP Oct 30 05:36:30.353399 systemd-networkd[1568]: calife2f4d2e6b5: Gained carrier Oct 30 05:36:30.362648 containerd[1688]: 2025-10-30 05:36:30.168 [INFO][4425] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--qvzr4-eth0 coredns-674b8bbfcf- kube-system 81efb543-1b70-4891-8544-9abff1da4cdb 844 0 2025-10-30 05:35:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-qvzr4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calife2f4d2e6b5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac" Namespace="kube-system" Pod="coredns-674b8bbfcf-qvzr4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qvzr4-" Oct 30 05:36:30.362648 containerd[1688]: 
2025-10-30 05:36:30.168 [INFO][4425] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac" Namespace="kube-system" Pod="coredns-674b8bbfcf-qvzr4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qvzr4-eth0" Oct 30 05:36:30.362648 containerd[1688]: 2025-10-30 05:36:30.210 [INFO][4461] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac" HandleID="k8s-pod-network.c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac" Workload="localhost-k8s-coredns--674b8bbfcf--qvzr4-eth0" Oct 30 05:36:30.362885 containerd[1688]: 2025-10-30 05:36:30.210 [INFO][4461] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac" HandleID="k8s-pod-network.c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac" Workload="localhost-k8s-coredns--674b8bbfcf--qvzr4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-qvzr4", "timestamp":"2025-10-30 05:36:30.210155482 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 05:36:30.362885 containerd[1688]: 2025-10-30 05:36:30.210 [INFO][4461] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 05:36:30.362885 containerd[1688]: 2025-10-30 05:36:30.241 [INFO][4461] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 05:36:30.362885 containerd[1688]: 2025-10-30 05:36:30.241 [INFO][4461] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 05:36:30.362885 containerd[1688]: 2025-10-30 05:36:30.320 [INFO][4461] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac" host="localhost" Oct 30 05:36:30.362885 containerd[1688]: 2025-10-30 05:36:30.324 [INFO][4461] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 05:36:30.362885 containerd[1688]: 2025-10-30 05:36:30.327 [INFO][4461] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 05:36:30.362885 containerd[1688]: 2025-10-30 05:36:30.329 [INFO][4461] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 05:36:30.362885 containerd[1688]: 2025-10-30 05:36:30.331 [INFO][4461] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 05:36:30.362885 containerd[1688]: 2025-10-30 05:36:30.332 [INFO][4461] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac" host="localhost" Oct 30 05:36:30.364138 containerd[1688]: 2025-10-30 05:36:30.333 [INFO][4461] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac Oct 30 05:36:30.364138 containerd[1688]: 2025-10-30 05:36:30.337 [INFO][4461] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac" host="localhost" Oct 30 05:36:30.364138 containerd[1688]: 2025-10-30 05:36:30.346 [INFO][4461] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac" host="localhost" Oct 30 05:36:30.364138 containerd[1688]: 2025-10-30 05:36:30.346 [INFO][4461] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac" host="localhost" Oct 30 05:36:30.364138 containerd[1688]: 2025-10-30 05:36:30.346 [INFO][4461] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 05:36:30.364138 containerd[1688]: 2025-10-30 05:36:30.346 [INFO][4461] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac" HandleID="k8s-pod-network.c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac" Workload="localhost-k8s-coredns--674b8bbfcf--qvzr4-eth0" Oct 30 05:36:30.364258 containerd[1688]: 2025-10-30 05:36:30.349 [INFO][4425] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac" Namespace="kube-system" Pod="coredns-674b8bbfcf-qvzr4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qvzr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qvzr4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"81efb543-1b70-4891-8544-9abff1da4cdb", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 5, 35, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-qvzr4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife2f4d2e6b5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 05:36:30.365007 containerd[1688]: 2025-10-30 05:36:30.349 [INFO][4425] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac" Namespace="kube-system" Pod="coredns-674b8bbfcf-qvzr4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qvzr4-eth0" Oct 30 05:36:30.365007 containerd[1688]: 2025-10-30 05:36:30.350 [INFO][4425] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife2f4d2e6b5 ContainerID="c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac" Namespace="kube-system" Pod="coredns-674b8bbfcf-qvzr4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qvzr4-eth0" Oct 30 05:36:30.365007 containerd[1688]: 2025-10-30 05:36:30.351 [INFO][4425] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac" Namespace="kube-system" Pod="coredns-674b8bbfcf-qvzr4" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qvzr4-eth0" Oct 30 05:36:30.365066 containerd[1688]: 2025-10-30 05:36:30.352 [INFO][4425] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac" Namespace="kube-system" Pod="coredns-674b8bbfcf-qvzr4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qvzr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qvzr4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"81efb543-1b70-4891-8544-9abff1da4cdb", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 5, 35, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac", Pod:"coredns-674b8bbfcf-qvzr4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife2f4d2e6b5", MAC:"ea:f6:52:b9:e5:33", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 05:36:30.365066 containerd[1688]: 2025-10-30 05:36:30.358 [INFO][4425] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac" Namespace="kube-system" Pod="coredns-674b8bbfcf-qvzr4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qvzr4-eth0" Oct 30 05:36:30.382319 containerd[1688]: time="2025-10-30T05:36:30.382292251Z" level=info msg="connecting to shim c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac" address="unix:///run/containerd/s/3dcc5d85c93e52cc9086f5aaa673fa58f3efd22dc53c5d317f050bab32eb6d56" namespace=k8s.io protocol=ttrpc version=3 Oct 30 05:36:30.410458 systemd[1]: Started cri-containerd-c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac.scope - libcontainer container c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac. 
Oct 30 05:36:30.415058 containerd[1688]: time="2025-10-30T05:36:30.415035897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d7959b66-vrsrq,Uid:a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a3cc1df9341cb757e1f1d7ded97a73d64c7548a73e7344fb5e76d9f8c22bc67d\"" Oct 30 05:36:30.417363 containerd[1688]: time="2025-10-30T05:36:30.417344789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 05:36:30.432019 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 05:36:30.446642 systemd-networkd[1568]: calic7603d32189: Link UP Oct 30 05:36:30.446737 systemd-networkd[1568]: calic7603d32189: Gained carrier Oct 30 05:36:30.461819 containerd[1688]: 2025-10-30 05:36:30.180 [INFO][4430] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7c65b456cf--hz876-eth0 calico-kube-controllers-7c65b456cf- calico-system e883b021-a9bb-48ab-80cc-7b947568b059 849 0 2025-10-30 05:36:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c65b456cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7c65b456cf-hz876 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic7603d32189 [] [] }} ContainerID="507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9" Namespace="calico-system" Pod="calico-kube-controllers-7c65b456cf-hz876" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c65b456cf--hz876-" Oct 30 05:36:30.461819 containerd[1688]: 2025-10-30 05:36:30.180 [INFO][4430] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9" Namespace="calico-system" Pod="calico-kube-controllers-7c65b456cf-hz876" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c65b456cf--hz876-eth0" Oct 30 05:36:30.461819 containerd[1688]: 2025-10-30 05:36:30.214 [INFO][4473] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9" HandleID="k8s-pod-network.507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9" Workload="localhost-k8s-calico--kube--controllers--7c65b456cf--hz876-eth0" Oct 30 05:36:30.461819 containerd[1688]: 2025-10-30 05:36:30.214 [INFO][4473] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9" HandleID="k8s-pod-network.507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9" Workload="localhost-k8s-calico--kube--controllers--7c65b456cf--hz876-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5180), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7c65b456cf-hz876", "timestamp":"2025-10-30 05:36:30.214105549 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 05:36:30.461819 containerd[1688]: 2025-10-30 05:36:30.214 [INFO][4473] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 05:36:30.461819 containerd[1688]: 2025-10-30 05:36:30.346 [INFO][4473] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 05:36:30.461819 containerd[1688]: 2025-10-30 05:36:30.346 [INFO][4473] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 05:36:30.461819 containerd[1688]: 2025-10-30 05:36:30.419 [INFO][4473] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9" host="localhost" Oct 30 05:36:30.461819 containerd[1688]: 2025-10-30 05:36:30.425 [INFO][4473] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 05:36:30.461819 containerd[1688]: 2025-10-30 05:36:30.428 [INFO][4473] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 05:36:30.461819 containerd[1688]: 2025-10-30 05:36:30.431 [INFO][4473] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 05:36:30.461819 containerd[1688]: 2025-10-30 05:36:30.434 [INFO][4473] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 05:36:30.461819 containerd[1688]: 2025-10-30 05:36:30.434 [INFO][4473] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9" host="localhost" Oct 30 05:36:30.461819 containerd[1688]: 2025-10-30 05:36:30.436 [INFO][4473] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9 Oct 30 05:36:30.461819 containerd[1688]: 2025-10-30 05:36:30.440 [INFO][4473] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9" host="localhost" Oct 30 05:36:30.461819 containerd[1688]: 2025-10-30 05:36:30.443 [INFO][4473] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9" host="localhost" Oct 30 05:36:30.461819 containerd[1688]: 2025-10-30 05:36:30.443 [INFO][4473] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9" host="localhost" Oct 30 05:36:30.461819 containerd[1688]: 2025-10-30 05:36:30.443 [INFO][4473] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 05:36:30.461819 containerd[1688]: 2025-10-30 05:36:30.443 [INFO][4473] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9" HandleID="k8s-pod-network.507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9" Workload="localhost-k8s-calico--kube--controllers--7c65b456cf--hz876-eth0" Oct 30 05:36:30.463425 containerd[1688]: 2025-10-30 05:36:30.444 [INFO][4430] cni-plugin/k8s.go 418: Populated endpoint ContainerID="507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9" Namespace="calico-system" Pod="calico-kube-controllers-7c65b456cf-hz876" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c65b456cf--hz876-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c65b456cf--hz876-eth0", GenerateName:"calico-kube-controllers-7c65b456cf-", Namespace:"calico-system", SelfLink:"", UID:"e883b021-a9bb-48ab-80cc-7b947568b059", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 5, 36, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c65b456cf", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7c65b456cf-hz876", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7603d32189", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 05:36:30.463425 containerd[1688]: 2025-10-30 05:36:30.445 [INFO][4430] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9" Namespace="calico-system" Pod="calico-kube-controllers-7c65b456cf-hz876" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c65b456cf--hz876-eth0" Oct 30 05:36:30.463425 containerd[1688]: 2025-10-30 05:36:30.445 [INFO][4430] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7603d32189 ContainerID="507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9" Namespace="calico-system" Pod="calico-kube-controllers-7c65b456cf-hz876" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c65b456cf--hz876-eth0" Oct 30 05:36:30.463425 containerd[1688]: 2025-10-30 05:36:30.447 [INFO][4430] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9" Namespace="calico-system" Pod="calico-kube-controllers-7c65b456cf-hz876" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c65b456cf--hz876-eth0" Oct 30 05:36:30.463425 containerd[1688]: 
2025-10-30 05:36:30.447 [INFO][4430] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9" Namespace="calico-system" Pod="calico-kube-controllers-7c65b456cf-hz876" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c65b456cf--hz876-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c65b456cf--hz876-eth0", GenerateName:"calico-kube-controllers-7c65b456cf-", Namespace:"calico-system", SelfLink:"", UID:"e883b021-a9bb-48ab-80cc-7b947568b059", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 5, 36, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c65b456cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9", Pod:"calico-kube-controllers-7c65b456cf-hz876", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7603d32189", MAC:"d2:79:e8:db:7f:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 05:36:30.463425 containerd[1688]: 
2025-10-30 05:36:30.457 [INFO][4430] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9" Namespace="calico-system" Pod="calico-kube-controllers-7c65b456cf-hz876" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c65b456cf--hz876-eth0" Oct 30 05:36:30.499811 containerd[1688]: time="2025-10-30T05:36:30.499788614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qvzr4,Uid:81efb543-1b70-4891-8544-9abff1da4cdb,Namespace:kube-system,Attempt:0,} returns sandbox id \"c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac\"" Oct 30 05:36:30.500029 containerd[1688]: time="2025-10-30T05:36:30.499947445Z" level=info msg="connecting to shim 507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9" address="unix:///run/containerd/s/1a098d1b5b14dd0c31ec3e04b53aec44c1b6caa646debc07661581bc4dce8202" namespace=k8s.io protocol=ttrpc version=3 Oct 30 05:36:30.530057 systemd[1]: Started cri-containerd-507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9.scope - libcontainer container 507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9. 
Oct 30 05:36:30.539642 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 05:36:30.564532 containerd[1688]: time="2025-10-30T05:36:30.564466217Z" level=info msg="CreateContainer within sandbox \"c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 05:36:30.580316 containerd[1688]: time="2025-10-30T05:36:30.579957002Z" level=info msg="Container b331a1d61129d965e1cd2a4ee3a415ecb0c5595a686cec26a69da4fbc658538c: CDI devices from CRI Config.CDIDevices: []" Oct 30 05:36:30.582373 containerd[1688]: time="2025-10-30T05:36:30.582350996Z" level=info msg="CreateContainer within sandbox \"c388c0a81a46ab650418800937d66231f293448a03e48716bd5377d04c73b6ac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b331a1d61129d965e1cd2a4ee3a415ecb0c5595a686cec26a69da4fbc658538c\"" Oct 30 05:36:30.582905 containerd[1688]: time="2025-10-30T05:36:30.582892905Z" level=info msg="StartContainer for \"b331a1d61129d965e1cd2a4ee3a415ecb0c5595a686cec26a69da4fbc658538c\"" Oct 30 05:36:30.584103 containerd[1688]: time="2025-10-30T05:36:30.584058166Z" level=info msg="connecting to shim b331a1d61129d965e1cd2a4ee3a415ecb0c5595a686cec26a69da4fbc658538c" address="unix:///run/containerd/s/3dcc5d85c93e52cc9086f5aaa673fa58f3efd22dc53c5d317f050bab32eb6d56" protocol=ttrpc version=3 Oct 30 05:36:30.596937 containerd[1688]: time="2025-10-30T05:36:30.596917102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c65b456cf-hz876,Uid:e883b021-a9bb-48ab-80cc-7b947568b059,Namespace:calico-system,Attempt:0,} returns sandbox id \"507ed639a59f6a3405d27e48fff6c55bea3da7ccda5ca60b2256f69ab2f1e8d9\"" Oct 30 05:36:30.602425 systemd[1]: Started cri-containerd-b331a1d61129d965e1cd2a4ee3a415ecb0c5595a686cec26a69da4fbc658538c.scope - libcontainer container 
b331a1d61129d965e1cd2a4ee3a415ecb0c5595a686cec26a69da4fbc658538c. Oct 30 05:36:30.643593 containerd[1688]: time="2025-10-30T05:36:30.643530496Z" level=info msg="StartContainer for \"b331a1d61129d965e1cd2a4ee3a415ecb0c5595a686cec26a69da4fbc658538c\" returns successfully" Oct 30 05:36:30.763933 containerd[1688]: time="2025-10-30T05:36:30.763905271Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:36:30.768920 containerd[1688]: time="2025-10-30T05:36:30.768876833Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 05:36:30.769060 containerd[1688]: time="2025-10-30T05:36:30.768972031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 05:36:30.769221 kubelet[2999]: E1030 05:36:30.769185 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 05:36:30.769569 kubelet[2999]: E1030 05:36:30.769225 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 05:36:30.769569 kubelet[2999]: E1030 05:36:30.769390 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r7b2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9d7959b66-vrsrq_calico-apiserver(a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 05:36:30.770461 containerd[1688]: time="2025-10-30T05:36:30.770438010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 05:36:30.770687 kubelet[2999]: E1030 05:36:30.770434 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d7959b66-vrsrq" podUID="a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be" Oct 30 05:36:31.101970 containerd[1688]: 
time="2025-10-30T05:36:31.101795258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c564d8bcd-kspf5,Uid:9762f718-a4c7-49eb-975e-48fddc6a6070,Namespace:calico-apiserver,Attempt:0,}" Oct 30 05:36:31.214597 systemd-networkd[1568]: cali954ec6d604d: Link UP Oct 30 05:36:31.215490 systemd-networkd[1568]: cali954ec6d604d: Gained carrier Oct 30 05:36:31.217995 containerd[1688]: time="2025-10-30T05:36:31.217938485Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:36:31.222304 containerd[1688]: time="2025-10-30T05:36:31.221656030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 05:36:31.222304 containerd[1688]: time="2025-10-30T05:36:31.221696756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 05:36:31.226470 kubelet[2999]: E1030 05:36:31.225430 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 05:36:31.226470 kubelet[2999]: E1030 05:36:31.225467 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 05:36:31.226470 kubelet[2999]: E1030 05:36:31.225557 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kkwwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c65b456cf-hz876_calico-system(e883b021-a9bb-48ab-80cc-7b947568b059): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 05:36:31.226780 kubelet[2999]: E1030 05:36:31.226660 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65b456cf-hz876" podUID="e883b021-a9bb-48ab-80cc-7b947568b059" Oct 30 05:36:31.227634 containerd[1688]: 2025-10-30 05:36:31.138 [INFO][4685] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-calico--apiserver--5c564d8bcd--kspf5-eth0 calico-apiserver-5c564d8bcd- calico-apiserver 9762f718-a4c7-49eb-975e-48fddc6a6070 850 0 2025-10-30 05:35:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c564d8bcd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c564d8bcd-kspf5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali954ec6d604d [] [] }} ContainerID="932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67" Namespace="calico-apiserver" Pod="calico-apiserver-5c564d8bcd-kspf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c564d8bcd--kspf5-" Oct 30 05:36:31.227634 containerd[1688]: 2025-10-30 05:36:31.138 [INFO][4685] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67" Namespace="calico-apiserver" Pod="calico-apiserver-5c564d8bcd-kspf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c564d8bcd--kspf5-eth0" Oct 30 05:36:31.227634 containerd[1688]: 2025-10-30 05:36:31.186 [INFO][4695] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67" HandleID="k8s-pod-network.932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67" Workload="localhost-k8s-calico--apiserver--5c564d8bcd--kspf5-eth0" Oct 30 05:36:31.227634 containerd[1688]: 2025-10-30 05:36:31.186 [INFO][4695] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67" HandleID="k8s-pod-network.932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67" Workload="localhost-k8s-calico--apiserver--5c564d8bcd--kspf5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0003320d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5c564d8bcd-kspf5", "timestamp":"2025-10-30 05:36:31.186513775 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 05:36:31.227634 containerd[1688]: 2025-10-30 05:36:31.186 [INFO][4695] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 05:36:31.227634 containerd[1688]: 2025-10-30 05:36:31.186 [INFO][4695] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 05:36:31.227634 containerd[1688]: 2025-10-30 05:36:31.186 [INFO][4695] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 05:36:31.227634 containerd[1688]: 2025-10-30 05:36:31.191 [INFO][4695] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67" host="localhost" Oct 30 05:36:31.227634 containerd[1688]: 2025-10-30 05:36:31.194 [INFO][4695] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 05:36:31.227634 containerd[1688]: 2025-10-30 05:36:31.197 [INFO][4695] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 05:36:31.227634 containerd[1688]: 2025-10-30 05:36:31.198 [INFO][4695] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 05:36:31.227634 containerd[1688]: 2025-10-30 05:36:31.199 [INFO][4695] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 05:36:31.227634 containerd[1688]: 2025-10-30 05:36:31.199 [INFO][4695] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67" host="localhost" Oct 30 05:36:31.227634 containerd[1688]: 2025-10-30 05:36:31.200 [INFO][4695] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67 Oct 30 05:36:31.227634 containerd[1688]: 2025-10-30 05:36:31.205 [INFO][4695] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67" host="localhost" Oct 30 05:36:31.227634 containerd[1688]: 2025-10-30 05:36:31.210 [INFO][4695] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67" host="localhost" Oct 30 05:36:31.227634 containerd[1688]: 2025-10-30 05:36:31.210 [INFO][4695] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67" host="localhost" Oct 30 05:36:31.227634 containerd[1688]: 2025-10-30 05:36:31.210 [INFO][4695] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 05:36:31.227634 containerd[1688]: 2025-10-30 05:36:31.210 [INFO][4695] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67" HandleID="k8s-pod-network.932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67" Workload="localhost-k8s-calico--apiserver--5c564d8bcd--kspf5-eth0" Oct 30 05:36:31.228667 containerd[1688]: 2025-10-30 05:36:31.211 [INFO][4685] cni-plugin/k8s.go 418: Populated endpoint ContainerID="932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67" Namespace="calico-apiserver" Pod="calico-apiserver-5c564d8bcd-kspf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c564d8bcd--kspf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c564d8bcd--kspf5-eth0", GenerateName:"calico-apiserver-5c564d8bcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"9762f718-a4c7-49eb-975e-48fddc6a6070", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 5, 35, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c564d8bcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c564d8bcd-kspf5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali954ec6d604d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 05:36:31.228667 containerd[1688]: 2025-10-30 05:36:31.211 [INFO][4685] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67" Namespace="calico-apiserver" Pod="calico-apiserver-5c564d8bcd-kspf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c564d8bcd--kspf5-eth0" Oct 30 05:36:31.228667 containerd[1688]: 2025-10-30 05:36:31.211 [INFO][4685] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali954ec6d604d ContainerID="932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67" Namespace="calico-apiserver" Pod="calico-apiserver-5c564d8bcd-kspf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c564d8bcd--kspf5-eth0" Oct 30 05:36:31.228667 containerd[1688]: 2025-10-30 05:36:31.215 [INFO][4685] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67" Namespace="calico-apiserver" Pod="calico-apiserver-5c564d8bcd-kspf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c564d8bcd--kspf5-eth0" Oct 30 05:36:31.228667 containerd[1688]: 2025-10-30 05:36:31.216 [INFO][4685] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67" Namespace="calico-apiserver" Pod="calico-apiserver-5c564d8bcd-kspf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c564d8bcd--kspf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c564d8bcd--kspf5-eth0", 
GenerateName:"calico-apiserver-5c564d8bcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"9762f718-a4c7-49eb-975e-48fddc6a6070", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 5, 35, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c564d8bcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67", Pod:"calico-apiserver-5c564d8bcd-kspf5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali954ec6d604d", MAC:"e6:62:75:c1:13:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 05:36:31.228667 containerd[1688]: 2025-10-30 05:36:31.224 [INFO][4685] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67" Namespace="calico-apiserver" Pod="calico-apiserver-5c564d8bcd-kspf5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c564d8bcd--kspf5-eth0" Oct 30 05:36:31.255417 containerd[1688]: time="2025-10-30T05:36:31.255387091Z" level=info msg="connecting to shim 932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67" 
address="unix:///run/containerd/s/5eb6008865dfe34452b895d9510e941ac56ee41607698d89c7393da1ba27da04" namespace=k8s.io protocol=ttrpc version=3 Oct 30 05:36:31.331427 systemd[1]: Started cri-containerd-932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67.scope - libcontainer container 932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67. Oct 30 05:36:31.344885 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 05:36:31.375976 containerd[1688]: time="2025-10-30T05:36:31.375641712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c564d8bcd-kspf5,Uid:9762f718-a4c7-49eb-975e-48fddc6a6070,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"932cc5f416d1c698ab4bdedf64b517c802e1c74e7cc819e956e3c807a2486b67\"" Oct 30 05:36:31.377211 containerd[1688]: time="2025-10-30T05:36:31.377189719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 05:36:31.394203 kubelet[2999]: E1030 05:36:31.394175 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d7959b66-vrsrq" podUID="a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be" Oct 30 05:36:31.396624 kubelet[2999]: E1030 05:36:31.396598 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65b456cf-hz876" podUID="e883b021-a9bb-48ab-80cc-7b947568b059" Oct 30 05:36:31.710891 containerd[1688]: time="2025-10-30T05:36:31.710769987Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:36:31.718583 containerd[1688]: time="2025-10-30T05:36:31.718465291Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 05:36:31.718583 containerd[1688]: time="2025-10-30T05:36:31.718555781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 05:36:31.718884 kubelet[2999]: E1030 05:36:31.718846 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 05:36:31.718941 kubelet[2999]: E1030 05:36:31.718890 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 05:36:31.719282 kubelet[2999]: E1030 05:36:31.718975 2999 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n22k8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c564d8bcd-kspf5_calico-apiserver(9762f718-a4c7-49eb-975e-48fddc6a6070): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 05:36:31.725405 kubelet[2999]: E1030 05:36:31.720463 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c564d8bcd-kspf5" podUID="9762f718-a4c7-49eb-975e-48fddc6a6070" Oct 30 05:36:31.781386 systemd-networkd[1568]: calife2f4d2e6b5: Gained IPv6LL Oct 30 05:36:31.909384 systemd-networkd[1568]: calib16a9e4d07f: Gained IPv6LL Oct 30 05:36:31.973416 systemd-networkd[1568]: 
calic7603d32189: Gained IPv6LL Oct 30 05:36:32.100725 containerd[1688]: time="2025-10-30T05:36:32.100690389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-tznl5,Uid:11de3de5-1c6c-4871-83d5-a6b9faf25770,Namespace:calico-system,Attempt:0,}" Oct 30 05:36:32.101829 containerd[1688]: time="2025-10-30T05:36:32.100690378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6df7j,Uid:23a5b2c9-a75a-4c5c-9522-bc7e36043c20,Namespace:kube-system,Attempt:0,}" Oct 30 05:36:32.353575 systemd-networkd[1568]: calia39c93331ba: Link UP Oct 30 05:36:32.353695 systemd-networkd[1568]: calia39c93331ba: Gained carrier Oct 30 05:36:32.370842 containerd[1688]: 2025-10-30 05:36:32.273 [INFO][4767] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--6df7j-eth0 coredns-674b8bbfcf- kube-system 23a5b2c9-a75a-4c5c-9522-bc7e36043c20 839 0 2025-10-30 05:35:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-6df7j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia39c93331ba [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe" Namespace="kube-system" Pod="coredns-674b8bbfcf-6df7j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6df7j-" Oct 30 05:36:32.370842 containerd[1688]: 2025-10-30 05:36:32.273 [INFO][4767] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe" Namespace="kube-system" Pod="coredns-674b8bbfcf-6df7j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6df7j-eth0" Oct 30 05:36:32.370842 containerd[1688]: 2025-10-30 05:36:32.304 [INFO][4790] ipam/ipam_plugin.go 227: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe" HandleID="k8s-pod-network.77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe" Workload="localhost-k8s-coredns--674b8bbfcf--6df7j-eth0" Oct 30 05:36:32.370842 containerd[1688]: 2025-10-30 05:36:32.304 [INFO][4790] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe" HandleID="k8s-pod-network.77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe" Workload="localhost-k8s-coredns--674b8bbfcf--6df7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003328e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-6df7j", "timestamp":"2025-10-30 05:36:32.304854061 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 05:36:32.370842 containerd[1688]: 2025-10-30 05:36:32.305 [INFO][4790] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 05:36:32.370842 containerd[1688]: 2025-10-30 05:36:32.305 [INFO][4790] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 05:36:32.370842 containerd[1688]: 2025-10-30 05:36:32.305 [INFO][4790] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 05:36:32.370842 containerd[1688]: 2025-10-30 05:36:32.313 [INFO][4790] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe" host="localhost" Oct 30 05:36:32.370842 containerd[1688]: 2025-10-30 05:36:32.319 [INFO][4790] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 05:36:32.370842 containerd[1688]: 2025-10-30 05:36:32.328 [INFO][4790] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 05:36:32.370842 containerd[1688]: 2025-10-30 05:36:32.331 [INFO][4790] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 05:36:32.370842 containerd[1688]: 2025-10-30 05:36:32.333 [INFO][4790] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 05:36:32.370842 containerd[1688]: 2025-10-30 05:36:32.333 [INFO][4790] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe" host="localhost" Oct 30 05:36:32.370842 containerd[1688]: 2025-10-30 05:36:32.334 [INFO][4790] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe Oct 30 05:36:32.370842 containerd[1688]: 2025-10-30 05:36:32.338 [INFO][4790] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe" host="localhost" Oct 30 05:36:32.370842 containerd[1688]: 2025-10-30 05:36:32.343 [INFO][4790] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe" host="localhost" Oct 30 05:36:32.370842 containerd[1688]: 2025-10-30 05:36:32.343 [INFO][4790] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe" host="localhost" Oct 30 05:36:32.370842 containerd[1688]: 2025-10-30 05:36:32.343 [INFO][4790] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 05:36:32.370842 containerd[1688]: 2025-10-30 05:36:32.343 [INFO][4790] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe" HandleID="k8s-pod-network.77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe" Workload="localhost-k8s-coredns--674b8bbfcf--6df7j-eth0" Oct 30 05:36:32.384463 containerd[1688]: 2025-10-30 05:36:32.349 [INFO][4767] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe" Namespace="kube-system" Pod="coredns-674b8bbfcf-6df7j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6df7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6df7j-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"23a5b2c9-a75a-4c5c-9522-bc7e36043c20", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 5, 35, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-6df7j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia39c93331ba", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 05:36:32.384463 containerd[1688]: 2025-10-30 05:36:32.349 [INFO][4767] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe" Namespace="kube-system" Pod="coredns-674b8bbfcf-6df7j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6df7j-eth0" Oct 30 05:36:32.384463 containerd[1688]: 2025-10-30 05:36:32.349 [INFO][4767] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia39c93331ba ContainerID="77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe" Namespace="kube-system" Pod="coredns-674b8bbfcf-6df7j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6df7j-eth0" Oct 30 05:36:32.384463 containerd[1688]: 2025-10-30 05:36:32.355 [INFO][4767] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe" Namespace="kube-system" Pod="coredns-674b8bbfcf-6df7j" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6df7j-eth0" Oct 30 05:36:32.384463 containerd[1688]: 2025-10-30 05:36:32.355 [INFO][4767] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe" Namespace="kube-system" Pod="coredns-674b8bbfcf-6df7j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6df7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6df7j-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"23a5b2c9-a75a-4c5c-9522-bc7e36043c20", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 5, 35, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe", Pod:"coredns-674b8bbfcf-6df7j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia39c93331ba", MAC:"4e:2b:d0:bc:fe:6a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 05:36:32.384463 containerd[1688]: 2025-10-30 05:36:32.367 [INFO][4767] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe" Namespace="kube-system" Pod="coredns-674b8bbfcf-6df7j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6df7j-eth0" Oct 30 05:36:32.386958 kubelet[2999]: I1030 05:36:32.384759 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qvzr4" podStartSLOduration=42.367858327 podStartE2EDuration="42.367858327s" podCreationTimestamp="2025-10-30 05:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 05:36:31.447378153 +0000 UTC m=+48.519336477" watchObservedRunningTime="2025-10-30 05:36:32.367858327 +0000 UTC m=+49.439816656" Oct 30 05:36:32.400912 containerd[1688]: time="2025-10-30T05:36:32.400587268Z" level=info msg="connecting to shim 77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe" address="unix:///run/containerd/s/633185b1b2356d9cfc85b716421937ebaab71b6b4758dce9494616c3ee3cd8b3" namespace=k8s.io protocol=ttrpc version=3 Oct 30 05:36:32.407743 kubelet[2999]: E1030 05:36:32.407715 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65b456cf-hz876" podUID="e883b021-a9bb-48ab-80cc-7b947568b059" Oct 30 05:36:32.408090 kubelet[2999]: E1030 05:36:32.408073 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c564d8bcd-kspf5" podUID="9762f718-a4c7-49eb-975e-48fddc6a6070" Oct 30 05:36:32.408281 kubelet[2999]: E1030 05:36:32.408227 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d7959b66-vrsrq" podUID="a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be" Oct 30 05:36:32.430416 systemd[1]: Started cri-containerd-77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe.scope - libcontainer container 77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe. 
Oct 30 05:36:32.446193 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 05:36:32.487892 containerd[1688]: time="2025-10-30T05:36:32.487866997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6df7j,Uid:23a5b2c9-a75a-4c5c-9522-bc7e36043c20,Namespace:kube-system,Attempt:0,} returns sandbox id \"77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe\"" Oct 30 05:36:32.498948 containerd[1688]: time="2025-10-30T05:36:32.498921857Z" level=info msg="CreateContainer within sandbox \"77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 05:36:32.501109 systemd-networkd[1568]: caliac5451d9824: Link UP Oct 30 05:36:32.502216 systemd-networkd[1568]: caliac5451d9824: Gained carrier Oct 30 05:36:32.512451 containerd[1688]: time="2025-10-30T05:36:32.512414510Z" level=info msg="Container 46d7393b63f457d3b496df61b911d660518601f22696af3af6a6b33c1037fe59: CDI devices from CRI Config.CDIDevices: []" Oct 30 05:36:32.516584 containerd[1688]: 2025-10-30 05:36:32.290 [INFO][4776] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--tznl5-eth0 goldmane-666569f655- calico-system 11de3de5-1c6c-4871-83d5-a6b9faf25770 847 0 2025-10-30 05:35:59 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-tznl5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliac5451d9824 [] [] }} ContainerID="4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1" Namespace="calico-system" Pod="goldmane-666569f655-tznl5" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tznl5-" Oct 30 
05:36:32.516584 containerd[1688]: 2025-10-30 05:36:32.290 [INFO][4776] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1" Namespace="calico-system" Pod="goldmane-666569f655-tznl5" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tznl5-eth0" Oct 30 05:36:32.516584 containerd[1688]: 2025-10-30 05:36:32.330 [INFO][4797] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1" HandleID="k8s-pod-network.4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1" Workload="localhost-k8s-goldmane--666569f655--tznl5-eth0" Oct 30 05:36:32.516584 containerd[1688]: 2025-10-30 05:36:32.330 [INFO][4797] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1" HandleID="k8s-pod-network.4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1" Workload="localhost-k8s-goldmane--666569f655--tznl5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5870), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-tznl5", "timestamp":"2025-10-30 05:36:32.330198945 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 05:36:32.516584 containerd[1688]: 2025-10-30 05:36:32.330 [INFO][4797] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 05:36:32.516584 containerd[1688]: 2025-10-30 05:36:32.343 [INFO][4797] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 05:36:32.516584 containerd[1688]: 2025-10-30 05:36:32.343 [INFO][4797] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 05:36:32.516584 containerd[1688]: 2025-10-30 05:36:32.424 [INFO][4797] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1" host="localhost" Oct 30 05:36:32.516584 containerd[1688]: 2025-10-30 05:36:32.439 [INFO][4797] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 05:36:32.516584 containerd[1688]: 2025-10-30 05:36:32.473 [INFO][4797] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 05:36:32.516584 containerd[1688]: 2025-10-30 05:36:32.478 [INFO][4797] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 05:36:32.516584 containerd[1688]: 2025-10-30 05:36:32.480 [INFO][4797] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 05:36:32.516584 containerd[1688]: 2025-10-30 05:36:32.480 [INFO][4797] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1" host="localhost" Oct 30 05:36:32.516584 containerd[1688]: 2025-10-30 05:36:32.481 [INFO][4797] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1 Oct 30 05:36:32.516584 containerd[1688]: 2025-10-30 05:36:32.484 [INFO][4797] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1" host="localhost" Oct 30 05:36:32.516584 containerd[1688]: 2025-10-30 05:36:32.491 [INFO][4797] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1" host="localhost" Oct 30 05:36:32.516584 containerd[1688]: 2025-10-30 05:36:32.491 [INFO][4797] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1" host="localhost" Oct 30 05:36:32.516584 containerd[1688]: 2025-10-30 05:36:32.491 [INFO][4797] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 05:36:32.516584 containerd[1688]: 2025-10-30 05:36:32.491 [INFO][4797] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1" HandleID="k8s-pod-network.4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1" Workload="localhost-k8s-goldmane--666569f655--tznl5-eth0" Oct 30 05:36:32.517203 containerd[1688]: 2025-10-30 05:36:32.496 [INFO][4776] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1" Namespace="calico-system" Pod="goldmane-666569f655-tznl5" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tznl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--tznl5-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"11de3de5-1c6c-4871-83d5-a6b9faf25770", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 5, 35, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-tznl5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliac5451d9824", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 05:36:32.517203 containerd[1688]: 2025-10-30 05:36:32.496 [INFO][4776] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1" Namespace="calico-system" Pod="goldmane-666569f655-tznl5" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tznl5-eth0" Oct 30 05:36:32.517203 containerd[1688]: 2025-10-30 05:36:32.496 [INFO][4776] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac5451d9824 ContainerID="4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1" Namespace="calico-system" Pod="goldmane-666569f655-tznl5" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tznl5-eth0" Oct 30 05:36:32.517203 containerd[1688]: 2025-10-30 05:36:32.504 [INFO][4776] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1" Namespace="calico-system" Pod="goldmane-666569f655-tznl5" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tznl5-eth0" Oct 30 05:36:32.517203 containerd[1688]: 2025-10-30 05:36:32.505 [INFO][4776] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1" Namespace="calico-system" Pod="goldmane-666569f655-tznl5" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tznl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--tznl5-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"11de3de5-1c6c-4871-83d5-a6b9faf25770", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 5, 35, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1", Pod:"goldmane-666569f655-tznl5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliac5451d9824", MAC:"42:67:36:73:65:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 05:36:32.517203 containerd[1688]: 2025-10-30 05:36:32.514 [INFO][4776] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1" Namespace="calico-system" Pod="goldmane-666569f655-tznl5" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tznl5-eth0" Oct 30 05:36:32.517813 containerd[1688]: time="2025-10-30T05:36:32.517792299Z" level=info msg="CreateContainer 
within sandbox \"77d37d5cbbd14108a849a9dd0daece6f7627867cd78ecc8d8066d4edfbb76afe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"46d7393b63f457d3b496df61b911d660518601f22696af3af6a6b33c1037fe59\"" Oct 30 05:36:32.518584 containerd[1688]: time="2025-10-30T05:36:32.518404933Z" level=info msg="StartContainer for \"46d7393b63f457d3b496df61b911d660518601f22696af3af6a6b33c1037fe59\"" Oct 30 05:36:32.518869 containerd[1688]: time="2025-10-30T05:36:32.518853439Z" level=info msg="connecting to shim 46d7393b63f457d3b496df61b911d660518601f22696af3af6a6b33c1037fe59" address="unix:///run/containerd/s/633185b1b2356d9cfc85b716421937ebaab71b6b4758dce9494616c3ee3cd8b3" protocol=ttrpc version=3 Oct 30 05:36:32.547735 containerd[1688]: time="2025-10-30T05:36:32.547383705Z" level=info msg="connecting to shim 4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1" address="unix:///run/containerd/s/86e69d82922aa270dd2f1e79f63bf835f4aeef02d2da27c8bcd760952d919501" namespace=k8s.io protocol=ttrpc version=3 Oct 30 05:36:32.551404 systemd[1]: Started cri-containerd-46d7393b63f457d3b496df61b911d660518601f22696af3af6a6b33c1037fe59.scope - libcontainer container 46d7393b63f457d3b496df61b911d660518601f22696af3af6a6b33c1037fe59. Oct 30 05:36:32.578421 systemd[1]: Started cri-containerd-4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1.scope - libcontainer container 4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1. 
Oct 30 05:36:32.601470 containerd[1688]: time="2025-10-30T05:36:32.601444307Z" level=info msg="StartContainer for \"46d7393b63f457d3b496df61b911d660518601f22696af3af6a6b33c1037fe59\" returns successfully" Oct 30 05:36:32.613404 systemd-networkd[1568]: cali954ec6d604d: Gained IPv6LL Oct 30 05:36:32.623493 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 05:36:32.669851 containerd[1688]: time="2025-10-30T05:36:32.669813188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-tznl5,Uid:11de3de5-1c6c-4871-83d5-a6b9faf25770,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a3480ff40a8f799692d5d86956264a12219b55c051506722b6b37d55df103d1\"" Oct 30 05:36:32.671945 containerd[1688]: time="2025-10-30T05:36:32.671894179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 05:36:33.021896 containerd[1688]: time="2025-10-30T05:36:33.021774410Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:36:33.027207 containerd[1688]: time="2025-10-30T05:36:33.027111823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 05:36:33.027207 containerd[1688]: time="2025-10-30T05:36:33.027175026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 05:36:33.027347 kubelet[2999]: E1030 05:36:33.027298 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 05:36:33.027347 kubelet[2999]: E1030 05:36:33.027333 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 05:36:33.028615 kubelet[2999]: E1030 05:36:33.027415 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jkwth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recu
rsiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-tznl5_calico-system(11de3de5-1c6c-4871-83d5-a6b9faf25770): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 05:36:33.028615 kubelet[2999]: E1030 05:36:33.028540 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-tznl5" podUID="11de3de5-1c6c-4871-83d5-a6b9faf25770" Oct 30 05:36:33.112507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount200641160.mount: Deactivated successfully. Oct 30 05:36:33.224625 containerd[1688]: time="2025-10-30T05:36:33.224592146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c564d8bcd-xdxxp,Uid:99f1075f-ef64-457d-a338-2febaf8a005c,Namespace:calico-apiserver,Attempt:0,}" Oct 30 05:36:33.224914 containerd[1688]: time="2025-10-30T05:36:33.224897453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzqzl,Uid:9e840872-e6a6-422f-a0c7-b6b186a24394,Namespace:calico-system,Attempt:0,}" Oct 30 05:36:33.409141 kubelet[2999]: E1030 05:36:33.408802 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tznl5" podUID="11de3de5-1c6c-4871-83d5-a6b9faf25770" Oct 30 05:36:33.449351 systemd-networkd[1568]: cali665c5e9e426: Link UP Oct 30 05:36:33.449470 systemd-networkd[1568]: cali665c5e9e426: Gained carrier Oct 30 05:36:33.473836 containerd[1688]: 2025-10-30 05:36:33.347 [INFO][4956] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wzqzl-eth0 csi-node-driver- calico-system 9e840872-e6a6-422f-a0c7-b6b186a24394 723 0 2025-10-30 05:36:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wzqzl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali665c5e9e426 [] [] }} ContainerID="b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2" Namespace="calico-system" Pod="csi-node-driver-wzqzl" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzqzl-" Oct 30 05:36:33.473836 containerd[1688]: 2025-10-30 05:36:33.348 [INFO][4956] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2" Namespace="calico-system" Pod="csi-node-driver-wzqzl" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzqzl-eth0" Oct 30 05:36:33.473836 containerd[1688]: 2025-10-30 05:36:33.369 [INFO][4979] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2" HandleID="k8s-pod-network.b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2" Workload="localhost-k8s-csi--node--driver--wzqzl-eth0" Oct 30 05:36:33.473836 containerd[1688]: 2025-10-30 05:36:33.369 [INFO][4979] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2" HandleID="k8s-pod-network.b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2" Workload="localhost-k8s-csi--node--driver--wzqzl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df8f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wzqzl", "timestamp":"2025-10-30 05:36:33.369459557 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 05:36:33.473836 containerd[1688]: 
2025-10-30 05:36:33.369 [INFO][4979] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 05:36:33.473836 containerd[1688]: 2025-10-30 05:36:33.369 [INFO][4979] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 05:36:33.473836 containerd[1688]: 2025-10-30 05:36:33.369 [INFO][4979] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 05:36:33.473836 containerd[1688]: 2025-10-30 05:36:33.375 [INFO][4979] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2" host="localhost" Oct 30 05:36:33.473836 containerd[1688]: 2025-10-30 05:36:33.378 [INFO][4979] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 05:36:33.473836 containerd[1688]: 2025-10-30 05:36:33.380 [INFO][4979] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 05:36:33.473836 containerd[1688]: 2025-10-30 05:36:33.381 [INFO][4979] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 05:36:33.473836 containerd[1688]: 2025-10-30 05:36:33.383 [INFO][4979] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 05:36:33.473836 containerd[1688]: 2025-10-30 05:36:33.383 [INFO][4979] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2" host="localhost" Oct 30 05:36:33.473836 containerd[1688]: 2025-10-30 05:36:33.384 [INFO][4979] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2 Oct 30 05:36:33.473836 containerd[1688]: 2025-10-30 05:36:33.388 [INFO][4979] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2" 
host="localhost" Oct 30 05:36:33.473836 containerd[1688]: 2025-10-30 05:36:33.442 [INFO][4979] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2" host="localhost" Oct 30 05:36:33.473836 containerd[1688]: 2025-10-30 05:36:33.442 [INFO][4979] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2" host="localhost" Oct 30 05:36:33.473836 containerd[1688]: 2025-10-30 05:36:33.442 [INFO][4979] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 05:36:33.473836 containerd[1688]: 2025-10-30 05:36:33.442 [INFO][4979] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2" HandleID="k8s-pod-network.b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2" Workload="localhost-k8s-csi--node--driver--wzqzl-eth0" Oct 30 05:36:33.503400 containerd[1688]: 2025-10-30 05:36:33.445 [INFO][4956] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2" Namespace="calico-system" Pod="csi-node-driver-wzqzl" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzqzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wzqzl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9e840872-e6a6-422f-a0c7-b6b186a24394", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 5, 36, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wzqzl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali665c5e9e426", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 05:36:33.503400 containerd[1688]: 2025-10-30 05:36:33.445 [INFO][4956] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2" Namespace="calico-system" Pod="csi-node-driver-wzqzl" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzqzl-eth0" Oct 30 05:36:33.503400 containerd[1688]: 2025-10-30 05:36:33.445 [INFO][4956] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali665c5e9e426 ContainerID="b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2" Namespace="calico-system" Pod="csi-node-driver-wzqzl" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzqzl-eth0" Oct 30 05:36:33.503400 containerd[1688]: 2025-10-30 05:36:33.448 [INFO][4956] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2" Namespace="calico-system" Pod="csi-node-driver-wzqzl" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzqzl-eth0" Oct 30 05:36:33.503400 containerd[1688]: 2025-10-30 05:36:33.450 
[INFO][4956] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2" Namespace="calico-system" Pod="csi-node-driver-wzqzl" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzqzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wzqzl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9e840872-e6a6-422f-a0c7-b6b186a24394", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 5, 36, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2", Pod:"csi-node-driver-wzqzl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali665c5e9e426", MAC:"ae:1c:13:05:78:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 05:36:33.503400 containerd[1688]: 2025-10-30 05:36:33.469 [INFO][4956] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2" Namespace="calico-system" Pod="csi-node-driver-wzqzl" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzqzl-eth0" Oct 30 05:36:33.588903 containerd[1688]: time="2025-10-30T05:36:33.588844740Z" level=info msg="connecting to shim b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2" address="unix:///run/containerd/s/4f56b4f6a73b1357dea415bedb9dc81901b079551f87d4749837763ffe9d286b" namespace=k8s.io protocol=ttrpc version=3 Oct 30 05:36:33.617330 systemd-networkd[1568]: calicead3e2d1b1: Link UP Oct 30 05:36:33.618680 systemd-networkd[1568]: calicead3e2d1b1: Gained carrier Oct 30 05:36:33.638411 systemd[1]: Started cri-containerd-b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2.scope - libcontainer container b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2. Oct 30 05:36:33.645185 kubelet[2999]: I1030 05:36:33.645149 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6df7j" podStartSLOduration=43.645137719 podStartE2EDuration="43.645137719s" podCreationTimestamp="2025-10-30 05:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 05:36:33.622592626 +0000 UTC m=+50.694550951" watchObservedRunningTime="2025-10-30 05:36:33.645137719 +0000 UTC m=+50.717096048" Oct 30 05:36:33.649068 containerd[1688]: 2025-10-30 05:36:33.348 [INFO][4953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c564d8bcd--xdxxp-eth0 calico-apiserver-5c564d8bcd- calico-apiserver 99f1075f-ef64-457d-a338-2febaf8a005c 845 0 2025-10-30 05:35:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c564d8bcd projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c564d8bcd-xdxxp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicead3e2d1b1 [] [] }} ContainerID="4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c" Namespace="calico-apiserver" Pod="calico-apiserver-5c564d8bcd-xdxxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c564d8bcd--xdxxp-" Oct 30 05:36:33.649068 containerd[1688]: 2025-10-30 05:36:33.349 [INFO][4953] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c" Namespace="calico-apiserver" Pod="calico-apiserver-5c564d8bcd-xdxxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c564d8bcd--xdxxp-eth0" Oct 30 05:36:33.649068 containerd[1688]: 2025-10-30 05:36:33.411 [INFO][4981] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c" HandleID="k8s-pod-network.4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c" Workload="localhost-k8s-calico--apiserver--5c564d8bcd--xdxxp-eth0" Oct 30 05:36:33.649068 containerd[1688]: 2025-10-30 05:36:33.411 [INFO][4981] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c" HandleID="k8s-pod-network.4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c" Workload="localhost-k8s-calico--apiserver--5c564d8bcd--xdxxp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad770), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5c564d8bcd-xdxxp", "timestamp":"2025-10-30 05:36:33.411820246 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 05:36:33.649068 containerd[1688]: 2025-10-30 05:36:33.411 [INFO][4981] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 05:36:33.649068 containerd[1688]: 2025-10-30 05:36:33.442 [INFO][4981] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 05:36:33.649068 containerd[1688]: 2025-10-30 05:36:33.442 [INFO][4981] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 05:36:33.649068 containerd[1688]: 2025-10-30 05:36:33.476 [INFO][4981] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c" host="localhost" Oct 30 05:36:33.649068 containerd[1688]: 2025-10-30 05:36:33.544 [INFO][4981] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 05:36:33.649068 containerd[1688]: 2025-10-30 05:36:33.553 [INFO][4981] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 05:36:33.649068 containerd[1688]: 2025-10-30 05:36:33.561 [INFO][4981] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 05:36:33.649068 containerd[1688]: 2025-10-30 05:36:33.566 [INFO][4981] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 05:36:33.649068 containerd[1688]: 2025-10-30 05:36:33.566 [INFO][4981] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c" host="localhost" Oct 30 05:36:33.649068 containerd[1688]: 2025-10-30 05:36:33.572 [INFO][4981] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c Oct 30 05:36:33.649068 containerd[1688]: 2025-10-30 05:36:33.589 [INFO][4981] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c" host="localhost" Oct 30 05:36:33.649068 containerd[1688]: 2025-10-30 05:36:33.611 [INFO][4981] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c" host="localhost" Oct 30 05:36:33.649068 containerd[1688]: 2025-10-30 05:36:33.611 [INFO][4981] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c" host="localhost" Oct 30 05:36:33.649068 containerd[1688]: 2025-10-30 05:36:33.611 [INFO][4981] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 05:36:33.649068 containerd[1688]: 2025-10-30 05:36:33.611 [INFO][4981] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c" HandleID="k8s-pod-network.4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c" Workload="localhost-k8s-calico--apiserver--5c564d8bcd--xdxxp-eth0" Oct 30 05:36:33.659669 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 05:36:33.663330 containerd[1688]: 2025-10-30 05:36:33.614 [INFO][4953] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c" Namespace="calico-apiserver" Pod="calico-apiserver-5c564d8bcd-xdxxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c564d8bcd--xdxxp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c564d8bcd--xdxxp-eth0", GenerateName:"calico-apiserver-5c564d8bcd-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"99f1075f-ef64-457d-a338-2febaf8a005c", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 5, 35, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c564d8bcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c564d8bcd-xdxxp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicead3e2d1b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 05:36:33.663330 containerd[1688]: 2025-10-30 05:36:33.614 [INFO][4953] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c" Namespace="calico-apiserver" Pod="calico-apiserver-5c564d8bcd-xdxxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c564d8bcd--xdxxp-eth0" Oct 30 05:36:33.663330 containerd[1688]: 2025-10-30 05:36:33.614 [INFO][4953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicead3e2d1b1 ContainerID="4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c" Namespace="calico-apiserver" Pod="calico-apiserver-5c564d8bcd-xdxxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c564d8bcd--xdxxp-eth0" Oct 30 05:36:33.663330 containerd[1688]: 
2025-10-30 05:36:33.619 [INFO][4953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c" Namespace="calico-apiserver" Pod="calico-apiserver-5c564d8bcd-xdxxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c564d8bcd--xdxxp-eth0" Oct 30 05:36:33.663330 containerd[1688]: 2025-10-30 05:36:33.620 [INFO][4953] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c" Namespace="calico-apiserver" Pod="calico-apiserver-5c564d8bcd-xdxxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c564d8bcd--xdxxp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c564d8bcd--xdxxp-eth0", GenerateName:"calico-apiserver-5c564d8bcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"99f1075f-ef64-457d-a338-2febaf8a005c", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 5, 35, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c564d8bcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c", Pod:"calico-apiserver-5c564d8bcd-xdxxp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicead3e2d1b1", MAC:"0e:8a:5d:b3:6d:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 05:36:33.663330 containerd[1688]: 2025-10-30 05:36:33.645 [INFO][4953] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c" Namespace="calico-apiserver" Pod="calico-apiserver-5c564d8bcd-xdxxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c564d8bcd--xdxxp-eth0" Oct 30 05:36:33.682237 containerd[1688]: time="2025-10-30T05:36:33.682212427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzqzl,Uid:9e840872-e6a6-422f-a0c7-b6b186a24394,Namespace:calico-system,Attempt:0,} returns sandbox id \"b01a6ff6bd2131ecbcf58b7cec3308cea953db1186e6a4e0f8def620d9edb0d2\"" Oct 30 05:36:33.683365 containerd[1688]: time="2025-10-30T05:36:33.683349440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 05:36:33.731483 containerd[1688]: time="2025-10-30T05:36:33.731434049Z" level=info msg="connecting to shim 4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c" address="unix:///run/containerd/s/77950159be2c66f6f19373d32c670b3bf9dab34e730d33778f2f257e71fa02f8" namespace=k8s.io protocol=ttrpc version=3 Oct 30 05:36:33.751408 systemd[1]: Started cri-containerd-4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c.scope - libcontainer container 4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c. 
Oct 30 05:36:33.765341 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 05:36:33.813284 containerd[1688]: time="2025-10-30T05:36:33.813246785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c564d8bcd-xdxxp,Uid:99f1075f-ef64-457d-a338-2febaf8a005c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4df72e824872e2d6bf8f2bb111d90db43ca1e11c3ad09605afdd0e1580820a2c\"" Oct 30 05:36:33.830361 systemd-networkd[1568]: calia39c93331ba: Gained IPv6LL Oct 30 05:36:34.030413 containerd[1688]: time="2025-10-30T05:36:34.030375492Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:36:34.044908 containerd[1688]: time="2025-10-30T05:36:34.044853761Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 05:36:34.044996 containerd[1688]: time="2025-10-30T05:36:34.044931107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 05:36:34.045043 kubelet[2999]: E1030 05:36:34.045017 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 05:36:34.045082 kubelet[2999]: E1030 05:36:34.045050 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 05:36:34.045420 containerd[1688]: time="2025-10-30T05:36:34.045220336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 05:36:34.045737 kubelet[2999]: E1030 05:36:34.045710 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nqxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Termin
ationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wzqzl_calico-system(9e840872-e6a6-422f-a0c7-b6b186a24394): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 05:36:34.214363 systemd-networkd[1568]: caliac5451d9824: Gained IPv6LL Oct 30 05:36:34.412502 containerd[1688]: time="2025-10-30T05:36:34.412437941Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:36:34.413409 containerd[1688]: time="2025-10-30T05:36:34.413351024Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 05:36:34.413824 containerd[1688]: time="2025-10-30T05:36:34.413452247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 05:36:34.413859 kubelet[2999]: E1030 05:36:34.413551 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 05:36:34.413859 kubelet[2999]: E1030 05:36:34.413573 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 05:36:34.413859 kubelet[2999]: E1030 05:36:34.413676 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwlk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c564d8bcd-xdxxp_calico-apiserver(99f1075f-ef64-457d-a338-2febaf8a005c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 05:36:34.414124 containerd[1688]: time="2025-10-30T05:36:34.413878704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 05:36:34.415327 kubelet[2999]: E1030 05:36:34.415306 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c564d8bcd-xdxxp" podUID="99f1075f-ef64-457d-a338-2febaf8a005c" Oct 30 05:36:34.417001 kubelet[2999]: E1030 
05:36:34.416936 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tznl5" podUID="11de3de5-1c6c-4871-83d5-a6b9faf25770" Oct 30 05:36:34.661361 systemd-networkd[1568]: cali665c5e9e426: Gained IPv6LL Oct 30 05:36:34.725445 systemd-networkd[1568]: calicead3e2d1b1: Gained IPv6LL Oct 30 05:36:34.786074 containerd[1688]: time="2025-10-30T05:36:34.786024239Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:36:34.786515 containerd[1688]: time="2025-10-30T05:36:34.786451432Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 05:36:34.786515 containerd[1688]: time="2025-10-30T05:36:34.786496214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 05:36:34.786694 kubelet[2999]: E1030 05:36:34.786613 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 
05:36:34.787090 kubelet[2999]: E1030 05:36:34.786746 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 05:36:34.787090 kubelet[2999]: E1030 05:36:34.786843 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nqxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:
*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wzqzl_calico-system(9e840872-e6a6-422f-a0c7-b6b186a24394): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 05:36:34.788224 kubelet[2999]: E1030 05:36:34.788137 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wzqzl" podUID="9e840872-e6a6-422f-a0c7-b6b186a24394" Oct 30 05:36:35.419496 kubelet[2999]: E1030 05:36:35.419320 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c564d8bcd-xdxxp" podUID="99f1075f-ef64-457d-a338-2febaf8a005c" Oct 30 05:36:35.420115 kubelet[2999]: E1030 05:36:35.419826 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wzqzl" podUID="9e840872-e6a6-422f-a0c7-b6b186a24394" Oct 30 05:36:36.422506 kubelet[2999]: E1030 05:36:36.422405 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-5c564d8bcd-xdxxp" podUID="99f1075f-ef64-457d-a338-2febaf8a005c" Oct 30 05:36:40.101131 containerd[1688]: time="2025-10-30T05:36:40.101062552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 05:36:40.462189 containerd[1688]: time="2025-10-30T05:36:40.462021729Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:36:40.476362 containerd[1688]: time="2025-10-30T05:36:40.476334194Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 05:36:40.476502 containerd[1688]: time="2025-10-30T05:36:40.476394969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 05:36:40.476652 kubelet[2999]: E1030 05:36:40.476612 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 05:36:40.476923 kubelet[2999]: E1030 05:36:40.476662 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 05:36:40.504427 kubelet[2999]: E1030 05:36:40.504389 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3e229c8b9d734849864085095eacfb5b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jbs2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fc4bbb676-9vsss_calico-system(a2f201cf-9e70-4227-a745-f73352a6fa53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 05:36:40.506747 containerd[1688]: time="2025-10-30T05:36:40.506727253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 
05:36:40.895446 containerd[1688]: time="2025-10-30T05:36:40.895093289Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:36:40.896862 containerd[1688]: time="2025-10-30T05:36:40.896816471Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 05:36:40.896862 containerd[1688]: time="2025-10-30T05:36:40.896848400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 05:36:40.897073 kubelet[2999]: E1030 05:36:40.897030 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 05:36:40.897135 kubelet[2999]: E1030 05:36:40.897081 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 05:36:40.897392 kubelet[2999]: E1030 05:36:40.897163 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbs2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fc4bbb676-9vsss_calico-system(a2f201cf-9e70-4227-a745-f73352a6fa53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 05:36:40.898781 kubelet[2999]: E1030 05:36:40.898752 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc4bbb676-9vsss" podUID="a2f201cf-9e70-4227-a745-f73352a6fa53" Oct 30 05:36:45.102466 containerd[1688]: time="2025-10-30T05:36:45.102122511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 05:36:45.408992 containerd[1688]: time="2025-10-30T05:36:45.408881585Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:36:45.409506 containerd[1688]: time="2025-10-30T05:36:45.409469920Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 05:36:45.409548 containerd[1688]: time="2025-10-30T05:36:45.409530296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active 
requests=0, bytes read=85" Oct 30 05:36:45.409695 kubelet[2999]: E1030 05:36:45.409646 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 05:36:45.410042 kubelet[2999]: E1030 05:36:45.409702 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 05:36:45.410042 kubelet[2999]: E1030 05:36:45.409801 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kkwwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c65b456cf-hz876_calico-system(e883b021-a9bb-48ab-80cc-7b947568b059): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 05:36:45.410977 kubelet[2999]: E1030 05:36:45.410947 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65b456cf-hz876" podUID="e883b021-a9bb-48ab-80cc-7b947568b059" Oct 30 05:36:46.101629 containerd[1688]: time="2025-10-30T05:36:46.101374358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 05:36:46.465207 containerd[1688]: 
time="2025-10-30T05:36:46.465172641Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:36:46.473788 containerd[1688]: time="2025-10-30T05:36:46.473748364Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 05:36:46.473859 containerd[1688]: time="2025-10-30T05:36:46.473802548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 05:36:46.473910 kubelet[2999]: E1030 05:36:46.473886 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 05:36:46.474265 kubelet[2999]: E1030 05:36:46.473917 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 05:36:46.474265 kubelet[2999]: E1030 05:36:46.473993 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r7b2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9d7959b66-vrsrq_calico-apiserver(a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 05:36:46.475345 kubelet[2999]: E1030 05:36:46.475308 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d7959b66-vrsrq" podUID="a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be" Oct 30 05:36:47.102907 containerd[1688]: time="2025-10-30T05:36:47.102194437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 05:36:47.427392 containerd[1688]: time="2025-10-30T05:36:47.427346209Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:36:47.427850 containerd[1688]: time="2025-10-30T05:36:47.427830798Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 05:36:47.427981 containerd[1688]: time="2025-10-30T05:36:47.427967663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 05:36:47.428038 kubelet[2999]: E1030 05:36:47.427949 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 05:36:47.428038 kubelet[2999]: E1030 05:36:47.428007 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 05:36:47.428612 containerd[1688]: time="2025-10-30T05:36:47.428365984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 05:36:47.436302 kubelet[2999]: E1030 05:36:47.428179 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,Sub
PathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jkwth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-tznl5_calico-system(11de3de5-1c6c-4871-83d5-a6b9faf25770): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 05:36:47.437642 kubelet[2999]: E1030 05:36:47.437600 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tznl5" podUID="11de3de5-1c6c-4871-83d5-a6b9faf25770" Oct 30 05:36:47.781032 containerd[1688]: time="2025-10-30T05:36:47.780691298Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:36:47.787343 containerd[1688]: time="2025-10-30T05:36:47.787319662Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 05:36:47.787407 containerd[1688]: time="2025-10-30T05:36:47.787373892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 05:36:47.787559 kubelet[2999]: E1030 05:36:47.787530 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 05:36:47.787861 kubelet[2999]: E1030 05:36:47.787715 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 05:36:47.788443 containerd[1688]: time="2025-10-30T05:36:47.788033398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 05:36:47.788474 kubelet[2999]: E1030 05:36:47.788208 2999 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nqxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wzqzl_calico-system(9e840872-e6a6-422f-a0c7-b6b186a24394): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 05:36:48.131646 containerd[1688]: time="2025-10-30T05:36:48.131263738Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:36:48.131646 containerd[1688]: time="2025-10-30T05:36:48.131625655Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 05:36:48.132039 containerd[1688]: time="2025-10-30T05:36:48.131676097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 05:36:48.132062 kubelet[2999]: E1030 05:36:48.131775 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 05:36:48.132062 kubelet[2999]: E1030 05:36:48.131807 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 05:36:48.132062 kubelet[2999]: E1030 05:36:48.131949 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n22k8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c564d8bcd-kspf5_calico-apiserver(9762f718-a4c7-49eb-975e-48fddc6a6070): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 05:36:48.132803 containerd[1688]: time="2025-10-30T05:36:48.132787978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 05:36:48.133241 kubelet[2999]: E1030 05:36:48.133216 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c564d8bcd-kspf5" podUID="9762f718-a4c7-49eb-975e-48fddc6a6070" Oct 30 05:36:48.469252 containerd[1688]: 
time="2025-10-30T05:36:48.469148691Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:36:48.469720 containerd[1688]: time="2025-10-30T05:36:48.469657634Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 05:36:48.469720 containerd[1688]: time="2025-10-30T05:36:48.469661895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 05:36:48.469893 kubelet[2999]: E1030 05:36:48.469860 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 05:36:48.469983 kubelet[2999]: E1030 05:36:48.469970 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 05:36:48.470167 kubelet[2999]: E1030 05:36:48.470132 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nqxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wzqzl_calico-system(9e840872-e6a6-422f-a0c7-b6b186a24394): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 05:36:48.471306 kubelet[2999]: E1030 05:36:48.471262 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wzqzl" podUID="9e840872-e6a6-422f-a0c7-b6b186a24394" Oct 30 05:36:49.102304 containerd[1688]: time="2025-10-30T05:36:49.102006053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 05:36:49.422908 containerd[1688]: time="2025-10-30T05:36:49.422808723Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:36:49.423189 containerd[1688]: time="2025-10-30T05:36:49.423161779Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 05:36:49.423325 containerd[1688]: time="2025-10-30T05:36:49.423223606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 05:36:49.423451 kubelet[2999]: E1030 05:36:49.423415 2999 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 05:36:49.423942 kubelet[2999]: E1030 05:36:49.423462 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 05:36:49.423942 kubelet[2999]: E1030 05:36:49.423573 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwlk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c564d8bcd-xdxxp_calico-apiserver(99f1075f-ef64-457d-a338-2febaf8a005c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 05:36:49.425191 kubelet[2999]: E1030 05:36:49.425160 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c564d8bcd-xdxxp" podUID="99f1075f-ef64-457d-a338-2febaf8a005c" Oct 30 05:36:53.141516 kubelet[2999]: E1030 05:36:53.141465 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc4bbb676-9vsss" podUID="a2f201cf-9e70-4227-a745-f73352a6fa53" Oct 30 05:36:58.654954 containerd[1688]: time="2025-10-30T05:36:58.654892842Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"6800f4d88e80d6366f007a8fb3b5a32ce863fca4f3eba69b2742da5146841bad\" id:\"58a8df0a2cdd6eae201649cf63be8a64436e813f47bc22034b4cd8831b221762\" pid:5151 exited_at:{seconds:1761802618 nanos:654536784}" Oct 30 05:36:59.104032 kubelet[2999]: E1030 05:36:59.103880 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c564d8bcd-kspf5" podUID="9762f718-a4c7-49eb-975e-48fddc6a6070" Oct 30 05:36:59.110634 kubelet[2999]: E1030 05:36:59.110546 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wzqzl" podUID="9e840872-e6a6-422f-a0c7-b6b186a24394" Oct 30 05:36:59.695493 systemd[1]: Started 
sshd@7-139.178.70.106:22-139.178.68.195:38098.service - OpenSSH per-connection server daemon (139.178.68.195:38098). Oct 30 05:36:59.834515 sshd[5171]: Accepted publickey for core from 139.178.68.195 port 38098 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ Oct 30 05:36:59.836946 sshd-session[5171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 05:36:59.840812 systemd-logind[1654]: New session 10 of user core. Oct 30 05:36:59.848385 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 30 05:37:00.103368 kubelet[2999]: E1030 05:37:00.102867 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65b456cf-hz876" podUID="e883b021-a9bb-48ab-80cc-7b947568b059" Oct 30 05:37:00.103368 kubelet[2999]: E1030 05:37:00.103082 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d7959b66-vrsrq" podUID="a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be" Oct 30 05:37:00.298154 sshd[5174]: Connection closed by 139.178.68.195 port 38098 Oct 30 05:37:00.298364 sshd-session[5171]: 
pam_unix(sshd:session): session closed for user core Oct 30 05:37:00.310912 systemd-logind[1654]: Session 10 logged out. Waiting for processes to exit. Oct 30 05:37:00.314412 systemd[1]: sshd@7-139.178.70.106:22-139.178.68.195:38098.service: Deactivated successfully. Oct 30 05:37:00.316159 systemd[1]: session-10.scope: Deactivated successfully. Oct 30 05:37:00.318717 systemd-logind[1654]: Removed session 10. Oct 30 05:37:02.102004 kubelet[2999]: E1030 05:37:02.101904 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tznl5" podUID="11de3de5-1c6c-4871-83d5-a6b9faf25770" Oct 30 05:37:02.102004 kubelet[2999]: E1030 05:37:02.101959 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c564d8bcd-xdxxp" podUID="99f1075f-ef64-457d-a338-2febaf8a005c" Oct 30 05:37:04.102350 containerd[1688]: time="2025-10-30T05:37:04.102301160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 05:37:04.454018 containerd[1688]: time="2025-10-30T05:37:04.453990451Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:37:04.454312 
containerd[1688]: time="2025-10-30T05:37:04.454290573Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 05:37:04.454371 containerd[1688]: time="2025-10-30T05:37:04.454339569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 05:37:04.454461 kubelet[2999]: E1030 05:37:04.454433 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 05:37:04.454726 kubelet[2999]: E1030 05:37:04.454468 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 05:37:04.454726 kubelet[2999]: E1030 05:37:04.454547 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3e229c8b9d734849864085095eacfb5b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jbs2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fc4bbb676-9vsss_calico-system(a2f201cf-9e70-4227-a745-f73352a6fa53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 05:37:04.456230 containerd[1688]: time="2025-10-30T05:37:04.456155208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 
05:37:04.813160 containerd[1688]: time="2025-10-30T05:37:04.813076808Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:37:04.813777 containerd[1688]: time="2025-10-30T05:37:04.813676751Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 05:37:04.813777 containerd[1688]: time="2025-10-30T05:37:04.813727089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 05:37:04.813843 kubelet[2999]: E1030 05:37:04.813819 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 05:37:04.813872 kubelet[2999]: E1030 05:37:04.813855 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 05:37:04.813950 kubelet[2999]: E1030 05:37:04.813925 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbs2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fc4bbb676-9vsss_calico-system(a2f201cf-9e70-4227-a745-f73352a6fa53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 05:37:04.815230 kubelet[2999]: E1030 05:37:04.815194 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc4bbb676-9vsss" podUID="a2f201cf-9e70-4227-a745-f73352a6fa53" Oct 30 05:37:05.334346 systemd[1]: Started sshd@8-139.178.70.106:22-139.178.68.195:36546.service - OpenSSH per-connection server daemon (139.178.68.195:36546). Oct 30 05:37:05.468811 sshd[5193]: Accepted publickey for core from 139.178.68.195 port 36546 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ Oct 30 05:37:05.470189 sshd-session[5193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 05:37:05.473427 systemd-logind[1654]: New session 11 of user core. Oct 30 05:37:05.479391 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 30 05:37:05.652861 sshd[5196]: Connection closed by 139.178.68.195 port 36546 Oct 30 05:37:05.653197 sshd-session[5193]: pam_unix(sshd:session): session closed for user core Oct 30 05:37:05.655808 systemd-logind[1654]: Session 11 logged out. Waiting for processes to exit. 
Oct 30 05:37:05.656567 systemd[1]: sshd@8-139.178.70.106:22-139.178.68.195:36546.service: Deactivated successfully. Oct 30 05:37:05.657603 systemd[1]: session-11.scope: Deactivated successfully. Oct 30 05:37:05.659606 systemd-logind[1654]: Removed session 11. Oct 30 05:37:10.662194 systemd[1]: Started sshd@9-139.178.70.106:22-139.178.68.195:36550.service - OpenSSH per-connection server daemon (139.178.68.195:36550). Oct 30 05:37:10.782903 sshd[5215]: Accepted publickey for core from 139.178.68.195 port 36550 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ Oct 30 05:37:10.783691 sshd-session[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 05:37:10.786408 systemd-logind[1654]: New session 12 of user core. Oct 30 05:37:10.792396 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 30 05:37:10.863174 sshd[5218]: Connection closed by 139.178.68.195 port 36550 Oct 30 05:37:10.863537 sshd-session[5215]: pam_unix(sshd:session): session closed for user core Oct 30 05:37:10.865762 systemd[1]: sshd@9-139.178.70.106:22-139.178.68.195:36550.service: Deactivated successfully. Oct 30 05:37:10.866892 systemd[1]: session-12.scope: Deactivated successfully. Oct 30 05:37:10.867502 systemd-logind[1654]: Session 12 logged out. Waiting for processes to exit. Oct 30 05:37:10.868186 systemd-logind[1654]: Removed session 12. 
Oct 30 05:37:11.102809 containerd[1688]: time="2025-10-30T05:37:11.102761651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 05:37:11.447810 containerd[1688]: time="2025-10-30T05:37:11.447670375Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:37:11.453752 containerd[1688]: time="2025-10-30T05:37:11.453651084Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 05:37:11.454005 containerd[1688]: time="2025-10-30T05:37:11.453831843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 05:37:11.454127 kubelet[2999]: E1030 05:37:11.454101 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 05:37:11.454520 kubelet[2999]: E1030 05:37:11.454199 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 05:37:11.455557 kubelet[2999]: E1030 05:37:11.454846 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r7b2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-9d7959b66-vrsrq_calico-apiserver(a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 05:37:11.456673 kubelet[2999]: E1030 05:37:11.456649 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d7959b66-vrsrq" podUID="a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be" Oct 30 05:37:12.101760 containerd[1688]: time="2025-10-30T05:37:12.101505554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 05:37:12.457559 containerd[1688]: time="2025-10-30T05:37:12.457388245Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:37:12.464627 containerd[1688]: time="2025-10-30T05:37:12.461922306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 05:37:12.464627 containerd[1688]: time="2025-10-30T05:37:12.461984086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 05:37:12.464713 kubelet[2999]: E1030 05:37:12.462105 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 05:37:12.464713 kubelet[2999]: E1030 05:37:12.462150 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 05:37:12.464713 kubelet[2999]: E1030 05:37:12.462314 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kkwwr,ReadOnly:true,Mou
ntPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c65b456cf-hz876_calico-system(e883b021-a9bb-48ab-80cc-7b947568b059): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 05:37:12.464713 kubelet[2999]: E1030 05:37:12.464310 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65b456cf-hz876" podUID="e883b021-a9bb-48ab-80cc-7b947568b059" Oct 30 05:37:13.104872 containerd[1688]: time="2025-10-30T05:37:13.103230305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 05:37:13.553665 containerd[1688]: time="2025-10-30T05:37:13.553636337Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:37:13.557190 containerd[1688]: time="2025-10-30T05:37:13.557159036Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 05:37:13.562551 containerd[1688]: time="2025-10-30T05:37:13.557229379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 05:37:13.562551 containerd[1688]: time="2025-10-30T05:37:13.557583288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 05:37:13.562609 kubelet[2999]: E1030 05:37:13.557324 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 05:37:13.562609 kubelet[2999]: E1030 05:37:13.557391 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 05:37:13.562609 kubelet[2999]: E1030 05:37:13.557782 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n22k8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c564d8bcd-kspf5_calico-apiserver(9762f718-a4c7-49eb-975e-48fddc6a6070): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 05:37:13.562609 kubelet[2999]: E1030 05:37:13.558917 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c564d8bcd-kspf5" podUID="9762f718-a4c7-49eb-975e-48fddc6a6070" Oct 30 05:37:13.920587 containerd[1688]: time="2025-10-30T05:37:13.920410582Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:37:13.934205 containerd[1688]: 
time="2025-10-30T05:37:13.934114633Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 05:37:13.934205 containerd[1688]: time="2025-10-30T05:37:13.934180753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 05:37:13.963958 kubelet[2999]: E1030 05:37:13.934320 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 05:37:13.963958 kubelet[2999]: E1030 05:37:13.934367 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 05:37:13.963958 kubelet[2999]: E1030 05:37:13.934472 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jkwth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-tznl5_calico-system(11de3de5-1c6c-4871-83d5-a6b9faf25770): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 05:37:13.963958 kubelet[2999]: E1030 05:37:13.935592 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tznl5" podUID="11de3de5-1c6c-4871-83d5-a6b9faf25770" Oct 30 05:37:14.101051 containerd[1688]: time="2025-10-30T05:37:14.101025111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 05:37:14.459430 containerd[1688]: time="2025-10-30T05:37:14.459385395Z" level=info msg="fetch failed after status: 404 Not 
Found" host=ghcr.io Oct 30 05:37:14.466945 containerd[1688]: time="2025-10-30T05:37:14.466917776Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 05:37:14.467041 containerd[1688]: time="2025-10-30T05:37:14.466976007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 05:37:14.467241 kubelet[2999]: E1030 05:37:14.467144 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 05:37:14.467241 kubelet[2999]: E1030 05:37:14.467196 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 05:37:14.467506 kubelet[2999]: E1030 05:37:14.467462 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nqxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wzqzl_calico-system(9e840872-e6a6-422f-a0c7-b6b186a24394): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 05:37:14.474137 containerd[1688]: time="2025-10-30T05:37:14.473897192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 05:37:14.831130 containerd[1688]: time="2025-10-30T05:37:14.830839322Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:37:14.837471 containerd[1688]: time="2025-10-30T05:37:14.837207842Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 05:37:14.837471 containerd[1688]: time="2025-10-30T05:37:14.837288090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 05:37:14.837556 kubelet[2999]: E1030 05:37:14.837376 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 05:37:14.837556 kubelet[2999]: E1030 05:37:14.837409 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 05:37:14.837556 kubelet[2999]: E1030 
05:37:14.837496 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nqxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-wzqzl_calico-system(9e840872-e6a6-422f-a0c7-b6b186a24394): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 05:37:14.838850 kubelet[2999]: E1030 05:37:14.838826 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wzqzl" podUID="9e840872-e6a6-422f-a0c7-b6b186a24394" Oct 30 05:37:15.871517 systemd[1]: Started sshd@10-139.178.70.106:22-139.178.68.195:44958.service - OpenSSH per-connection server daemon (139.178.68.195:44958). Oct 30 05:37:15.907990 sshd[5232]: Accepted publickey for core from 139.178.68.195 port 44958 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ Oct 30 05:37:15.908845 sshd-session[5232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 05:37:15.912179 systemd-logind[1654]: New session 13 of user core. Oct 30 05:37:15.917366 systemd[1]: Started session-13.scope - Session 13 of User core. 
Oct 30 05:37:15.983687 sshd[5235]: Connection closed by 139.178.68.195 port 44958 Oct 30 05:37:15.983596 sshd-session[5232]: pam_unix(sshd:session): session closed for user core Oct 30 05:37:15.990075 systemd[1]: sshd@10-139.178.70.106:22-139.178.68.195:44958.service: Deactivated successfully. Oct 30 05:37:15.992535 systemd[1]: session-13.scope: Deactivated successfully. Oct 30 05:37:15.994535 systemd-logind[1654]: Session 13 logged out. Waiting for processes to exit. Oct 30 05:37:16.000813 systemd[1]: Started sshd@11-139.178.70.106:22-139.178.68.195:44972.service - OpenSSH per-connection server daemon (139.178.68.195:44972). Oct 30 05:37:16.004912 systemd-logind[1654]: Removed session 13. Oct 30 05:37:16.069156 sshd[5248]: Accepted publickey for core from 139.178.68.195 port 44972 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ Oct 30 05:37:16.070340 sshd-session[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 05:37:16.075280 systemd-logind[1654]: New session 14 of user core. Oct 30 05:37:16.081483 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 30 05:37:16.102208 containerd[1688]: time="2025-10-30T05:37:16.102142505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 05:37:16.259686 sshd[5251]: Connection closed by 139.178.68.195 port 44972 Oct 30 05:37:16.259910 sshd-session[5248]: pam_unix(sshd:session): session closed for user core Oct 30 05:37:16.267955 systemd[1]: sshd@11-139.178.70.106:22-139.178.68.195:44972.service: Deactivated successfully. Oct 30 05:37:16.269261 systemd[1]: session-14.scope: Deactivated successfully. Oct 30 05:37:16.270239 systemd-logind[1654]: Session 14 logged out. Waiting for processes to exit. Oct 30 05:37:16.271758 systemd[1]: Started sshd@12-139.178.70.106:22-139.178.68.195:44984.service - OpenSSH per-connection server daemon (139.178.68.195:44984). Oct 30 05:37:16.273159 systemd-logind[1654]: Removed session 14. 
Oct 30 05:37:16.309305 sshd[5260]: Accepted publickey for core from 139.178.68.195 port 44984 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ
Oct 30 05:37:16.309931 sshd-session[5260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 05:37:16.313076 systemd-logind[1654]: New session 15 of user core.
Oct 30 05:37:16.317373 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 30 05:37:16.406195 sshd[5263]: Connection closed by 139.178.68.195 port 44984
Oct 30 05:37:16.408487 sshd-session[5260]: pam_unix(sshd:session): session closed for user core
Oct 30 05:37:16.411712 systemd[1]: sshd@12-139.178.70.106:22-139.178.68.195:44984.service: Deactivated successfully.
Oct 30 05:37:16.413709 systemd[1]: session-15.scope: Deactivated successfully.
Oct 30 05:37:16.415112 systemd-logind[1654]: Session 15 logged out. Waiting for processes to exit.
Oct 30 05:37:16.416251 systemd-logind[1654]: Removed session 15.
Oct 30 05:37:16.461165 containerd[1688]: time="2025-10-30T05:37:16.461112205Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 05:37:16.975874 containerd[1688]: time="2025-10-30T05:37:16.975840431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 30 05:37:16.993961 containerd[1688]: time="2025-10-30T05:37:16.993919886Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 30 05:37:16.994269 kubelet[2999]: E1030 05:37:16.994163 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 30 05:37:16.994269 kubelet[2999]: E1030 05:37:16.994214 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 30 05:37:17.013910 kubelet[2999]: E1030 05:37:17.013846 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwlk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c564d8bcd-xdxxp_calico-apiserver(99f1075f-ef64-457d-a338-2febaf8a005c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 30 05:37:17.015404 kubelet[2999]: E1030 05:37:17.015377 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c564d8bcd-xdxxp" podUID="99f1075f-ef64-457d-a338-2febaf8a005c"
Oct 30 05:37:19.102115 kubelet[2999]: E1030 05:37:19.101702 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc4bbb676-9vsss" podUID="a2f201cf-9e70-4227-a745-f73352a6fa53"
Oct 30 05:37:21.416440 systemd[1]: Started sshd@13-139.178.70.106:22-139.178.68.195:44990.service - OpenSSH per-connection server daemon (139.178.68.195:44990).
Oct 30 05:37:21.462169 sshd[5285]: Accepted publickey for core from 139.178.68.195 port 44990 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ
Oct 30 05:37:21.462947 sshd-session[5285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 05:37:21.466257 systemd-logind[1654]: New session 16 of user core.
Oct 30 05:37:21.474620 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 30 05:37:21.591207 sshd[5288]: Connection closed by 139.178.68.195 port 44990
Oct 30 05:37:21.591572 sshd-session[5285]: pam_unix(sshd:session): session closed for user core
Oct 30 05:37:21.595170 systemd[1]: sshd@13-139.178.70.106:22-139.178.68.195:44990.service: Deactivated successfully.
Oct 30 05:37:21.596775 systemd[1]: session-16.scope: Deactivated successfully.
Oct 30 05:37:21.597398 systemd-logind[1654]: Session 16 logged out. Waiting for processes to exit.
Oct 30 05:37:21.598268 systemd-logind[1654]: Removed session 16.
Oct 30 05:37:23.102769 kubelet[2999]: E1030 05:37:23.102321 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d7959b66-vrsrq" podUID="a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be"
Oct 30 05:37:24.102360 kubelet[2999]: E1030 05:37:24.102054 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65b456cf-hz876" podUID="e883b021-a9bb-48ab-80cc-7b947568b059"
Oct 30 05:37:26.101223 kubelet[2999]: E1030 05:37:26.101170 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c564d8bcd-kspf5" podUID="9762f718-a4c7-49eb-975e-48fddc6a6070"
Oct 30 05:37:26.101851 kubelet[2999]: E1030 05:37:26.101452 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wzqzl" podUID="9e840872-e6a6-422f-a0c7-b6b186a24394"
Oct 30 05:37:26.600963 systemd[1]: Started sshd@14-139.178.70.106:22-139.178.68.195:52578.service - OpenSSH per-connection server daemon (139.178.68.195:52578).
Oct 30 05:37:26.743457 sshd[5300]: Accepted publickey for core from 139.178.68.195 port 52578 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ
Oct 30 05:37:26.746769 sshd-session[5300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 05:37:26.753449 systemd-logind[1654]: New session 17 of user core.
Oct 30 05:37:26.759490 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 30 05:37:26.851068 sshd[5303]: Connection closed by 139.178.68.195 port 52578
Oct 30 05:37:26.851635 sshd-session[5300]: pam_unix(sshd:session): session closed for user core
Oct 30 05:37:26.855039 systemd-logind[1654]: Session 17 logged out. Waiting for processes to exit.
Oct 30 05:37:26.855697 systemd[1]: sshd@14-139.178.70.106:22-139.178.68.195:52578.service: Deactivated successfully.
Oct 30 05:37:26.857699 systemd[1]: session-17.scope: Deactivated successfully.
Oct 30 05:37:26.859857 systemd-logind[1654]: Removed session 17.
Oct 30 05:37:27.101353 kubelet[2999]: E1030 05:37:27.101258 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tznl5" podUID="11de3de5-1c6c-4871-83d5-a6b9faf25770"
Oct 30 05:37:28.101130 kubelet[2999]: E1030 05:37:28.100880 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c564d8bcd-xdxxp" podUID="99f1075f-ef64-457d-a338-2febaf8a005c"
Oct 30 05:37:28.483050 containerd[1688]: time="2025-10-30T05:37:28.482948776Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6800f4d88e80d6366f007a8fb3b5a32ce863fca4f3eba69b2742da5146841bad\" id:\"d029faa63012d69fa51456ad0457650fc5c8fc3bc245fb978b347d955cf480a3\" pid:5331 exited_at:{seconds:1761802648 nanos:482751303}"
Oct 30 05:37:30.101914 kubelet[2999]: E1030 05:37:30.101855 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc4bbb676-9vsss" podUID="a2f201cf-9e70-4227-a745-f73352a6fa53"
Oct 30 05:37:31.861455 systemd[1]: Started sshd@15-139.178.70.106:22-139.178.68.195:52592.service - OpenSSH per-connection server daemon (139.178.68.195:52592).
Oct 30 05:37:32.076563 sshd[5344]: Accepted publickey for core from 139.178.68.195 port 52592 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ
Oct 30 05:37:32.079235 sshd-session[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 05:37:32.084740 systemd-logind[1654]: New session 18 of user core.
Oct 30 05:37:32.090745 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 30 05:37:32.174511 sshd[5347]: Connection closed by 139.178.68.195 port 52592
Oct 30 05:37:32.173797 sshd-session[5344]: pam_unix(sshd:session): session closed for user core
Oct 30 05:37:32.178700 systemd[1]: sshd@15-139.178.70.106:22-139.178.68.195:52592.service: Deactivated successfully.
Oct 30 05:37:32.181340 systemd[1]: session-18.scope: Deactivated successfully.
Oct 30 05:37:32.182771 systemd-logind[1654]: Session 18 logged out. Waiting for processes to exit.
Oct 30 05:37:32.184758 systemd-logind[1654]: Removed session 18.
Oct 30 05:37:36.100640 kubelet[2999]: E1030 05:37:36.100587 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d7959b66-vrsrq" podUID="a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be"
Oct 30 05:37:37.104363 kubelet[2999]: E1030 05:37:37.104055 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65b456cf-hz876" podUID="e883b021-a9bb-48ab-80cc-7b947568b059"
Oct 30 05:37:37.105244 kubelet[2999]: E1030 05:37:37.105144 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c564d8bcd-kspf5" podUID="9762f718-a4c7-49eb-975e-48fddc6a6070"
Oct 30 05:37:37.184024 systemd[1]: Started sshd@16-139.178.70.106:22-139.178.68.195:60762.service - OpenSSH per-connection server daemon (139.178.68.195:60762).
Oct 30 05:37:37.225153 sshd[5358]: Accepted publickey for core from 139.178.68.195 port 60762 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ
Oct 30 05:37:37.225975 sshd-session[5358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 05:37:37.230782 systemd-logind[1654]: New session 19 of user core.
Oct 30 05:37:37.234432 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 30 05:37:37.328546 sshd[5361]: Connection closed by 139.178.68.195 port 60762
Oct 30 05:37:37.331095 sshd-session[5358]: pam_unix(sshd:session): session closed for user core
Oct 30 05:37:37.339873 systemd[1]: sshd@16-139.178.70.106:22-139.178.68.195:60762.service: Deactivated successfully.
Oct 30 05:37:37.342764 systemd[1]: session-19.scope: Deactivated successfully.
Oct 30 05:37:37.343793 systemd-logind[1654]: Session 19 logged out. Waiting for processes to exit.
Oct 30 05:37:37.347331 systemd[1]: Started sshd@17-139.178.70.106:22-139.178.68.195:60766.service - OpenSSH per-connection server daemon (139.178.68.195:60766).
Oct 30 05:37:37.348003 systemd-logind[1654]: Removed session 19.
Oct 30 05:37:37.401055 sshd[5373]: Accepted publickey for core from 139.178.68.195 port 60766 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ
Oct 30 05:37:37.402031 sshd-session[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 05:37:37.406476 systemd-logind[1654]: New session 20 of user core.
Oct 30 05:37:37.411384 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 30 05:37:37.829461 sshd[5376]: Connection closed by 139.178.68.195 port 60766
Oct 30 05:37:37.830716 sshd-session[5373]: pam_unix(sshd:session): session closed for user core
Oct 30 05:37:37.839223 systemd[1]: sshd@17-139.178.70.106:22-139.178.68.195:60766.service: Deactivated successfully.
Oct 30 05:37:37.840827 systemd[1]: session-20.scope: Deactivated successfully.
Oct 30 05:37:37.842558 systemd-logind[1654]: Session 20 logged out. Waiting for processes to exit.
Oct 30 05:37:37.846126 systemd[1]: Started sshd@18-139.178.70.106:22-139.178.68.195:60768.service - OpenSSH per-connection server daemon (139.178.68.195:60768).
Oct 30 05:37:37.848148 systemd-logind[1654]: Removed session 20.
Oct 30 05:37:37.902682 sshd[5386]: Accepted publickey for core from 139.178.68.195 port 60768 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ
Oct 30 05:37:37.903565 sshd-session[5386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 05:37:37.906444 systemd-logind[1654]: New session 21 of user core.
Oct 30 05:37:37.910370 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 30 05:37:38.617347 sshd[5389]: Connection closed by 139.178.68.195 port 60768
Oct 30 05:37:38.625054 sshd-session[5386]: pam_unix(sshd:session): session closed for user core
Oct 30 05:37:38.626706 systemd[1]: Started sshd@19-139.178.70.106:22-139.178.68.195:60780.service - OpenSSH per-connection server daemon (139.178.68.195:60780).
Oct 30 05:37:38.632488 systemd[1]: sshd@18-139.178.70.106:22-139.178.68.195:60768.service: Deactivated successfully.
Oct 30 05:37:38.635138 systemd[1]: session-21.scope: Deactivated successfully.
Oct 30 05:37:38.642518 systemd-logind[1654]: Session 21 logged out. Waiting for processes to exit.
Oct 30 05:37:38.643818 systemd-logind[1654]: Removed session 21.
Oct 30 05:37:38.698549 sshd[5402]: Accepted publickey for core from 139.178.68.195 port 60780 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ
Oct 30 05:37:38.699641 sshd-session[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 05:37:38.702916 systemd-logind[1654]: New session 22 of user core.
Oct 30 05:37:38.709411 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 30 05:37:38.962790 sshd[5409]: Connection closed by 139.178.68.195 port 60780
Oct 30 05:37:38.970921 systemd[1]: sshd@19-139.178.70.106:22-139.178.68.195:60780.service: Deactivated successfully.
Oct 30 05:37:38.964129 sshd-session[5402]: pam_unix(sshd:session): session closed for user core
Oct 30 05:37:38.973570 systemd[1]: session-22.scope: Deactivated successfully.
Oct 30 05:37:38.974489 systemd-logind[1654]: Session 22 logged out. Waiting for processes to exit.
Oct 30 05:37:38.978848 systemd[1]: Started sshd@20-139.178.70.106:22-139.178.68.195:60796.service - OpenSSH per-connection server daemon (139.178.68.195:60796).
Oct 30 05:37:38.980556 systemd-logind[1654]: Removed session 22.
Oct 30 05:37:39.016726 sshd[5419]: Accepted publickey for core from 139.178.68.195 port 60796 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ
Oct 30 05:37:39.017874 sshd-session[5419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 05:37:39.021419 systemd-logind[1654]: New session 23 of user core.
Oct 30 05:37:39.026421 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 30 05:37:39.102932 kubelet[2999]: E1030 05:37:39.102592 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tznl5" podUID="11de3de5-1c6c-4871-83d5-a6b9faf25770"
Oct 30 05:37:39.102932 kubelet[2999]: E1030 05:37:39.102888 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wzqzl" podUID="9e840872-e6a6-422f-a0c7-b6b186a24394"
Oct 30 05:37:39.132603 sshd[5422]: Connection closed by 139.178.68.195 port 60796
Oct 30 05:37:39.132782 sshd-session[5419]: pam_unix(sshd:session): session closed for user core
Oct 30 05:37:39.137134 systemd[1]: sshd@20-139.178.70.106:22-139.178.68.195:60796.service: Deactivated successfully.
Oct 30 05:37:39.139244 systemd[1]: session-23.scope: Deactivated successfully.
Oct 30 05:37:39.142268 systemd-logind[1654]: Session 23 logged out. Waiting for processes to exit.
Oct 30 05:37:39.142993 systemd-logind[1654]: Removed session 23.
Oct 30 05:37:42.101084 kubelet[2999]: E1030 05:37:42.101014 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c564d8bcd-xdxxp" podUID="99f1075f-ef64-457d-a338-2febaf8a005c"
Oct 30 05:37:43.103530 kubelet[2999]: E1030 05:37:43.103421 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc4bbb676-9vsss" podUID="a2f201cf-9e70-4227-a745-f73352a6fa53"
Oct 30 05:37:44.142228 systemd[1]: Started sshd@21-139.178.70.106:22-139.178.68.195:44040.service - OpenSSH per-connection server daemon (139.178.68.195:44040).
Oct 30 05:37:44.183880 sshd[5437]: Accepted publickey for core from 139.178.68.195 port 44040 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ
Oct 30 05:37:44.185885 sshd-session[5437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 05:37:44.190737 systemd-logind[1654]: New session 24 of user core.
Oct 30 05:37:44.194170 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 30 05:37:44.290755 sshd[5440]: Connection closed by 139.178.68.195 port 44040
Oct 30 05:37:44.291121 sshd-session[5437]: pam_unix(sshd:session): session closed for user core
Oct 30 05:37:44.294703 systemd[1]: sshd@21-139.178.70.106:22-139.178.68.195:44040.service: Deactivated successfully.
Oct 30 05:37:44.294986 systemd-logind[1654]: Session 24 logged out. Waiting for processes to exit.
Oct 30 05:37:44.296874 systemd[1]: session-24.scope: Deactivated successfully.
Oct 30 05:37:44.299918 systemd-logind[1654]: Removed session 24.
Oct 30 05:37:48.101893 kubelet[2999]: E1030 05:37:48.101661 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c564d8bcd-kspf5" podUID="9762f718-a4c7-49eb-975e-48fddc6a6070"
Oct 30 05:37:49.305023 systemd[1]: Started sshd@22-139.178.70.106:22-139.178.68.195:44042.service - OpenSSH per-connection server daemon (139.178.68.195:44042).
Oct 30 05:37:50.053300 sshd[5458]: Accepted publickey for core from 139.178.68.195 port 44042 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ
Oct 30 05:37:50.056465 sshd-session[5458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 05:37:50.062128 systemd-logind[1654]: New session 25 of user core.
Oct 30 05:37:50.065417 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 30 05:37:50.105029 kubelet[2999]: E1030 05:37:50.104983 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d7959b66-vrsrq" podUID="a80719c9-5ba9-4e0b-97b9-ad9c69d9e2be"
Oct 30 05:37:50.106193 kubelet[2999]: E1030 05:37:50.106167 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wzqzl" podUID="9e840872-e6a6-422f-a0c7-b6b186a24394"
Oct 30 05:37:50.546688 sshd[5461]: Connection closed by 139.178.68.195 port 44042
Oct 30 05:37:50.547083 sshd-session[5458]: pam_unix(sshd:session): session closed for user core
Oct 30 05:37:50.555485 systemd[1]: sshd@22-139.178.70.106:22-139.178.68.195:44042.service: Deactivated successfully.
Oct 30 05:37:50.558497 systemd[1]: session-25.scope: Deactivated successfully.
Oct 30 05:37:50.559066 systemd-logind[1654]: Session 25 logged out. Waiting for processes to exit.
Oct 30 05:37:50.560810 systemd-logind[1654]: Removed session 25.
Oct 30 05:37:51.101322 kubelet[2999]: E1030 05:37:51.101155 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tznl5" podUID="11de3de5-1c6c-4871-83d5-a6b9faf25770"
Oct 30 05:37:51.101733 kubelet[2999]: E1030 05:37:51.101557 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c65b456cf-hz876" podUID="e883b021-a9bb-48ab-80cc-7b947568b059"
Oct 30 05:37:55.102591 containerd[1688]: time="2025-10-30T05:37:55.101575631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Oct 30 05:37:55.448389 containerd[1688]: time="2025-10-30T05:37:55.448349065Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 05:37:55.456565 containerd[1688]: time="2025-10-30T05:37:55.456521846Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Oct 30 05:37:55.456699 containerd[1688]: time="2025-10-30T05:37:55.456606151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Oct 30 05:37:55.498651 kubelet[2999]: E1030 05:37:55.498605 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 30 05:37:55.500086 kubelet[2999]: E1030 05:37:55.500061 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 30 05:37:55.500183 kubelet[2999]: E1030 05:37:55.500155 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3e229c8b9d734849864085095eacfb5b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jbs2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fc4bbb676-9vsss_calico-system(a2f201cf-9e70-4227-a745-f73352a6fa53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Oct 30 05:37:55.502418 containerd[1688]: time="2025-10-30T05:37:55.502397496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Oct 30 05:37:55.555659 systemd[1]: Started sshd@23-139.178.70.106:22-139.178.68.195:59228.service - OpenSSH per-connection server daemon (139.178.68.195:59228).
Oct 30 05:37:55.602222 sshd[5479]: Accepted publickey for core from 139.178.68.195 port 59228 ssh2: RSA SHA256:KIpSDL70YldeAtV2+Yri8KI6blHeuWNqKvGCxAMYgRQ
Oct 30 05:37:55.602128 sshd-session[5479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 05:37:55.605383 systemd-logind[1654]: New session 26 of user core.
Oct 30 05:37:55.614445 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 30 05:37:55.684253 sshd[5482]: Connection closed by 139.178.68.195 port 59228
Oct 30 05:37:55.684719 sshd-session[5479]: pam_unix(sshd:session): session closed for user core
Oct 30 05:37:55.686946 systemd[1]: sshd@23-139.178.70.106:22-139.178.68.195:59228.service: Deactivated successfully.
Oct 30 05:37:55.688356 systemd[1]: session-26.scope: Deactivated successfully.
Oct 30 05:37:55.689265 systemd-logind[1654]: Session 26 logged out. Waiting for processes to exit.
Oct 30 05:37:55.690163 systemd-logind[1654]: Removed session 26.
Oct 30 05:37:55.846133 containerd[1688]: time="2025-10-30T05:37:55.846016571Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 05:37:55.846350 containerd[1688]: time="2025-10-30T05:37:55.846327972Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 05:37:55.846400 containerd[1688]: time="2025-10-30T05:37:55.846381527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 05:37:55.846564 kubelet[2999]: E1030 05:37:55.846471 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 05:37:55.846564 kubelet[2999]: E1030 05:37:55.846501 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 05:37:55.846801 kubelet[2999]: E1030 05:37:55.846615 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbs2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fc4bbb676-9vsss_calico-system(a2f201cf-9e70-4227-a745-f73352a6fa53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 05:37:55.848126 kubelet[2999]: E1030 05:37:55.848094 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc4bbb676-9vsss" podUID="a2f201cf-9e70-4227-a745-f73352a6fa53" Oct 30 05:37:56.101127 kubelet[2999]: E1030 05:37:56.101053 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c564d8bcd-xdxxp" podUID="99f1075f-ef64-457d-a338-2febaf8a005c"