Nov 6 05:27:47.614037 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Thu Nov 6 03:32:51 -00 2025
Nov 6 05:27:47.614054 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=42c7eeb79a8ee89597bba4204806137326be9acdbca65a8fd923766f65b62f69
Nov 6 05:27:47.614061 kernel: Disabled fast string operations
Nov 6 05:27:47.614065 kernel: BIOS-provided physical RAM map:
Nov 6 05:27:47.614069 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Nov 6 05:27:47.614073 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Nov 6 05:27:47.614078 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Nov 6 05:27:47.614083 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Nov 6 05:27:47.614088 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Nov 6 05:27:47.614092 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Nov 6 05:27:47.614096 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Nov 6 05:27:47.614101 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Nov 6 05:27:47.614105 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Nov 6 05:27:47.614109 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Nov 6 05:27:47.614115 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Nov 6 05:27:47.614120 kernel: NX (Execute Disable) protection: active
Nov 6 05:27:47.614125 kernel: APIC: Static calls initialized
Nov 6 05:27:47.614130 kernel: SMBIOS 2.7 present.
Nov 6 05:27:47.614135 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Nov 6 05:27:47.614140 kernel: DMI: Memory slots populated: 1/128
Nov 6 05:27:47.614145 kernel: vmware: hypercall mode: 0x00
Nov 6 05:27:47.614149 kernel: Hypervisor detected: VMware
Nov 6 05:27:47.614154 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Nov 6 05:27:47.614159 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Nov 6 05:27:47.614165 kernel: vmware: using clock offset of 3995602889 ns
Nov 6 05:27:47.614169 kernel: tsc: Detected 3408.000 MHz processor
Nov 6 05:27:47.614175 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 6 05:27:47.614180 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 6 05:27:47.614185 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Nov 6 05:27:47.614190 kernel: total RAM covered: 3072M
Nov 6 05:27:47.614195 kernel: Found optimal setting for mtrr clean up
Nov 6 05:27:47.614200 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Nov 6 05:27:47.614205 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Nov 6 05:27:47.614211 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 6 05:27:47.614216 kernel: Using GB pages for direct mapping
Nov 6 05:27:47.614221 kernel: ACPI: Early table checksum verification disabled
Nov 6 05:27:47.614237 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Nov 6 05:27:47.614243 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Nov 6 05:27:47.614248 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Nov 6 05:27:47.614253 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Nov 6 05:27:47.614260 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Nov 6 05:27:47.614266 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Nov 6 05:27:47.614271 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Nov 6 05:27:47.614276 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Nov 6 05:27:47.614282 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Nov 6 05:27:47.614287 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Nov 6 05:27:47.614292 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Nov 6 05:27:47.614298 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Nov 6 05:27:47.614303 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Nov 6 05:27:47.614308 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Nov 6 05:27:47.614314 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Nov 6 05:27:47.614319 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Nov 6 05:27:47.614324 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Nov 6 05:27:47.614329 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Nov 6 05:27:47.614334 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Nov 6 05:27:47.614339 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Nov 6 05:27:47.614344 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Nov 6 05:27:47.614350 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Nov 6 05:27:47.614355 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 6 05:27:47.614360 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 6 05:27:47.614365 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Nov 6 05:27:47.614371 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff]
Nov 6 05:27:47.614376 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff]
Nov 6 05:27:47.614381 kernel: Zone ranges:
Nov 6 05:27:47.614386 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 6 05:27:47.614391 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Nov 6 05:27:47.614397 kernel: Normal empty
Nov 6 05:27:47.614403 kernel: Device empty
Nov 6 05:27:47.614408 kernel: Movable zone start for each node
Nov 6 05:27:47.614413 kernel: Early memory node ranges
Nov 6 05:27:47.614418 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Nov 6 05:27:47.614423 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Nov 6 05:27:47.614428 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Nov 6 05:27:47.614433 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Nov 6 05:27:47.614438 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 6 05:27:47.614443 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Nov 6 05:27:47.614449 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Nov 6 05:27:47.614454 kernel: ACPI: PM-Timer IO Port: 0x1008
Nov 6 05:27:47.614460 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Nov 6 05:27:47.614465 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Nov 6 05:27:47.614470 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Nov 6 05:27:47.614475 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Nov 6 05:27:47.614480 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Nov 6 05:27:47.614485 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Nov 6 05:27:47.614490 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Nov 6 05:27:47.614496 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Nov 6 05:27:47.614501 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Nov 6 05:27:47.614506 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Nov 6 05:27:47.614511 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Nov 6 05:27:47.614516 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Nov 6 05:27:47.614521 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Nov 6 05:27:47.614526 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Nov 6 05:27:47.614531 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Nov 6 05:27:47.614536 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Nov 6 05:27:47.614541 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Nov 6 05:27:47.614547 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Nov 6 05:27:47.614552 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Nov 6 05:27:47.614557 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Nov 6 05:27:47.614562 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Nov 6 05:27:47.614567 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Nov 6 05:27:47.614572 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Nov 6 05:27:47.614577 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Nov 6 05:27:47.614582 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Nov 6 05:27:47.614588 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Nov 6 05:27:47.614592 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Nov 6 05:27:47.614598 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Nov 6 05:27:47.614603 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Nov 6 05:27:47.614608 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Nov 6 05:27:47.614613 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Nov 6 05:27:47.614619 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Nov 6 05:27:47.614624 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Nov 6 05:27:47.614629 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Nov 6 05:27:47.614634 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Nov 6 05:27:47.614639 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Nov 6 05:27:47.614644 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Nov 6 05:27:47.614650 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Nov 6 05:27:47.614655 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Nov 6 05:27:47.614660 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Nov 6 05:27:47.614669 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Nov 6 05:27:47.614675 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Nov 6 05:27:47.614680 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Nov 6 05:27:47.614685 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Nov 6 05:27:47.614691 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Nov 6 05:27:47.614697 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Nov 6 05:27:47.614703 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Nov 6 05:27:47.614708 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Nov 6 05:27:47.614713 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Nov 6 05:27:47.614719 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Nov 6 05:27:47.614724 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Nov 6 05:27:47.614729 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Nov 6 05:27:47.614735 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Nov 6 05:27:47.614740 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Nov 6 05:27:47.614746 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Nov 6 05:27:47.614752 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Nov 6 05:27:47.614757 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Nov 6 05:27:47.614763 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Nov 6 05:27:47.614768 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Nov 6 05:27:47.614773 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Nov 6 05:27:47.614779 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Nov 6 05:27:47.614784 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Nov 6 05:27:47.614790 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Nov 6 05:27:47.614795 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Nov 6 05:27:47.614801 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Nov 6 05:27:47.614807 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Nov 6 05:27:47.614812 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Nov 6 05:27:47.614818 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Nov 6 05:27:47.614823 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Nov 6 05:27:47.614828 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Nov 6 05:27:47.614834 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Nov 6 05:27:47.614839 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Nov 6 05:27:47.614844 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Nov 6 05:27:47.614850 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Nov 6 05:27:47.614856 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Nov 6 05:27:47.614861 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Nov 6 05:27:47.614867 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Nov 6 05:27:47.614872 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Nov 6 05:27:47.614878 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Nov 6 05:27:47.614884 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Nov 6 05:27:47.614889 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Nov 6 05:27:47.614894 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Nov 6 05:27:47.614904 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Nov 6 05:27:47.614909 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Nov 6 05:27:47.614915 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Nov 6 05:27:47.614921 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Nov 6 05:27:47.614926 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Nov 6 05:27:47.614932 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Nov 6 05:27:47.614937 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Nov 6 05:27:47.614942 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Nov 6 05:27:47.614948 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Nov 6 05:27:47.614953 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Nov 6 05:27:47.614958 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Nov 6 05:27:47.614964 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Nov 6 05:27:47.614970 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Nov 6 05:27:47.614976 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Nov 6 05:27:47.614981 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Nov 6 05:27:47.614986 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Nov 6 05:27:47.614992 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Nov 6 05:27:47.614997 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Nov 6 05:27:47.615002 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Nov 6 05:27:47.615008 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Nov 6 05:27:47.615013 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Nov 6 05:27:47.615018 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Nov 6 05:27:47.615025 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Nov 6 05:27:47.615030 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Nov 6 05:27:47.615035 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Nov 6 05:27:47.615041 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Nov 6 05:27:47.615046 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Nov 6 05:27:47.615051 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Nov 6 05:27:47.615057 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Nov 6 05:27:47.615062 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Nov 6 05:27:47.615067 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Nov 6 05:27:47.615073 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Nov 6 05:27:47.615079 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Nov 6 05:27:47.615084 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Nov 6 05:27:47.615090 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Nov 6 05:27:47.615095 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Nov 6 05:27:47.615100 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Nov 6 05:27:47.615106 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Nov 6 05:27:47.615111 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Nov 6 05:27:47.615116 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Nov 6 05:27:47.615122 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Nov 6 05:27:47.615128 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Nov 6 05:27:47.615133 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Nov 6 05:27:47.615139 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Nov 6 05:27:47.615144 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Nov 6 05:27:47.615149 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Nov 6 05:27:47.615155 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Nov 6 05:27:47.615160 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Nov 6 05:27:47.615166 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 6 05:27:47.615171 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Nov 6 05:27:47.615176 kernel: TSC deadline timer available
Nov 6 05:27:47.615183 kernel: CPU topo: Max. logical packages: 128
Nov 6 05:27:47.615189 kernel: CPU topo: Max. logical dies: 128
Nov 6 05:27:47.615194 kernel: CPU topo: Max. dies per package: 1
Nov 6 05:27:47.615199 kernel: CPU topo: Max. threads per core: 1
Nov 6 05:27:47.615205 kernel: CPU topo: Num. cores per package: 1
Nov 6 05:27:47.615210 kernel: CPU topo: Num. threads per package: 1
Nov 6 05:27:47.615215 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs
Nov 6 05:27:47.615221 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Nov 6 05:27:47.617162 kernel: Booting paravirtualized kernel on VMware hypervisor
Nov 6 05:27:47.617174 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 6 05:27:47.617183 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Nov 6 05:27:47.617189 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Nov 6 05:27:47.617195 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Nov 6 05:27:47.617200 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Nov 6 05:27:47.617205 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Nov 6 05:27:47.617211 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Nov 6 05:27:47.617216 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Nov 6 05:27:47.617222 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Nov 6 05:27:47.617237 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Nov 6 05:27:47.617246 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Nov 6 05:27:47.617251 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Nov 6 05:27:47.617256 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Nov 6 05:27:47.617262 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Nov 6 05:27:47.617267 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Nov 6 05:27:47.617273 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Nov 6 05:27:47.617278 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Nov 6 05:27:47.617283 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Nov 6 05:27:47.617290 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Nov 6 05:27:47.617295 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Nov 6 05:27:47.617301 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=42c7eeb79a8ee89597bba4204806137326be9acdbca65a8fd923766f65b62f69
Nov 6 05:27:47.617307 kernel: random: crng init done
Nov 6 05:27:47.617313 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Nov 6 05:27:47.617318 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Nov 6 05:27:47.617324 kernel: printk: log_buf_len min size: 262144 bytes
Nov 6 05:27:47.617329 kernel: printk: log_buf_len: 1048576 bytes
Nov 6 05:27:47.617335 kernel: printk: early log buf free: 245688(93%)
Nov 6 05:27:47.617341 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 6 05:27:47.617347 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 6 05:27:47.617353 kernel: Fallback order for Node 0: 0
Nov 6 05:27:47.617358 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524157
Nov 6 05:27:47.617364 kernel: Policy zone: DMA32
Nov 6 05:27:47.617369 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 6 05:27:47.617375 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Nov 6 05:27:47.617380 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 6 05:27:47.617385 kernel: ftrace: allocated 157 pages with 5 groups
Nov 6 05:27:47.617392 kernel: Dynamic Preempt: voluntary
Nov 6 05:27:47.617397 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 6 05:27:47.617403 kernel: rcu: RCU event tracing is enabled.
Nov 6 05:27:47.617409 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Nov 6 05:27:47.617415 kernel: Trampoline variant of Tasks RCU enabled.
Nov 6 05:27:47.617420 kernel: Rude variant of Tasks RCU enabled.
Nov 6 05:27:47.617426 kernel: Tracing variant of Tasks RCU enabled.
Nov 6 05:27:47.617431 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 6 05:27:47.617437 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Nov 6 05:27:47.617442 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Nov 6 05:27:47.617449 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Nov 6 05:27:47.617455 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Nov 6 05:27:47.617460 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Nov 6 05:27:47.617466 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Nov 6 05:27:47.617471 kernel: Console: colour VGA+ 80x25
Nov 6 05:27:47.617477 kernel: printk: legacy console [tty0] enabled
Nov 6 05:27:47.617482 kernel: printk: legacy console [ttyS0] enabled
Nov 6 05:27:47.617488 kernel: ACPI: Core revision 20240827
Nov 6 05:27:47.617494 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Nov 6 05:27:47.617500 kernel: APIC: Switch to symmetric I/O mode setup
Nov 6 05:27:47.617506 kernel: x2apic enabled
Nov 6 05:27:47.617511 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 6 05:27:47.617517 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 6 05:27:47.617522 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Nov 6 05:27:47.617528 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Nov 6 05:27:47.617534 kernel: Disabled fast string operations
Nov 6 05:27:47.617539 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 6 05:27:47.617546 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 6 05:27:47.617551 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 6 05:27:47.617557 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Nov 6 05:27:47.617562 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Nov 6 05:27:47.617568 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Nov 6 05:27:47.617574 kernel: RETBleed: Mitigation: Enhanced IBRS
Nov 6 05:27:47.617580 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 6 05:27:47.617585 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 6 05:27:47.617591 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 6 05:27:47.617597 kernel: SRBDS: Unknown: Dependent on hypervisor status
Nov 6 05:27:47.617603 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 6 05:27:47.617609 kernel: active return thunk: its_return_thunk
Nov 6 05:27:47.617614 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 6 05:27:47.617620 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 6 05:27:47.617625 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 6 05:27:47.617631 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 6 05:27:47.617636 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 6 05:27:47.617642 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 6 05:27:47.617649 kernel: Freeing SMP alternatives memory: 32K
Nov 6 05:27:47.617654 kernel: pid_max: default: 131072 minimum: 1024
Nov 6 05:27:47.617660 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 6 05:27:47.617665 kernel: landlock: Up and running.
Nov 6 05:27:47.617671 kernel: SELinux: Initializing.
Nov 6 05:27:47.617676 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 6 05:27:47.617682 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 6 05:27:47.617687 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Nov 6 05:27:47.617693 kernel: Performance Events: Skylake events, core PMU driver.
Nov 6 05:27:47.617699 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Nov 6 05:27:47.617705 kernel: core: CPUID marked event: 'instructions' unavailable
Nov 6 05:27:47.617710 kernel: core: CPUID marked event: 'bus cycles' unavailable
Nov 6 05:27:47.617716 kernel: core: CPUID marked event: 'cache references' unavailable
Nov 6 05:27:47.617721 kernel: core: CPUID marked event: 'cache misses' unavailable
Nov 6 05:27:47.617726 kernel: core: CPUID marked event: 'branch instructions' unavailable
Nov 6 05:27:47.617732 kernel: core: CPUID marked event: 'branch misses' unavailable
Nov 6 05:27:47.617737 kernel: ... version: 1
Nov 6 05:27:47.617743 kernel: ... bit width: 48
Nov 6 05:27:47.617749 kernel: ... generic registers: 4
Nov 6 05:27:47.617755 kernel: ... value mask: 0000ffffffffffff
Nov 6 05:27:47.617760 kernel: ... max period: 000000007fffffff
Nov 6 05:27:47.617765 kernel: ... fixed-purpose events: 0
Nov 6 05:27:47.617771 kernel: ... event mask: 000000000000000f
Nov 6 05:27:47.617776 kernel: signal: max sigframe size: 1776
Nov 6 05:27:47.617782 kernel: rcu: Hierarchical SRCU implementation.
Nov 6 05:27:47.617787 kernel: rcu: Max phase no-delay instances is 400.
Nov 6 05:27:47.617793 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level
Nov 6 05:27:47.617799 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 6 05:27:47.617805 kernel: smp: Bringing up secondary CPUs ...
Nov 6 05:27:47.617810 kernel: smpboot: x86: Booting SMP configuration:
Nov 6 05:27:47.617816 kernel: .... node #0, CPUs: #1
Nov 6 05:27:47.617821 kernel: Disabled fast string operations
Nov 6 05:27:47.617827 kernel: smp: Brought up 1 node, 2 CPUs
Nov 6 05:27:47.617832 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Nov 6 05:27:47.617838 kernel: Memory: 1942640K/2096628K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15356K init, 2688K bss, 142608K reserved, 0K cma-reserved)
Nov 6 05:27:47.617843 kernel: devtmpfs: initialized
Nov 6 05:27:47.617850 kernel: x86/mm: Memory block size: 128MB
Nov 6 05:27:47.617856 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Nov 6 05:27:47.617861 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 6 05:27:47.617867 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Nov 6 05:27:47.617872 kernel: pinctrl core: initialized pinctrl subsystem
Nov 6 05:27:47.617878 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 6 05:27:47.617883 kernel: audit: initializing netlink subsys (disabled)
Nov 6 05:27:47.617889 kernel: audit: type=2000 audit(1762406864.278:1): state=initialized audit_enabled=0 res=1
Nov 6 05:27:47.617894 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 6 05:27:47.617901 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 6 05:27:47.617906 kernel: cpuidle: using governor menu
Nov 6 05:27:47.617911 kernel: Simple Boot Flag at 0x36 set to 0x80
Nov 6 05:27:47.617917 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 6 05:27:47.617922 kernel: dca service started, version 1.12.1
Nov 6 05:27:47.617928 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f]
Nov 6 05:27:47.617941 kernel: PCI: Using configuration type 1 for base access
Nov 6 05:27:47.617948 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 6 05:27:47.617954 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 6 05:27:47.617961 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 6 05:27:47.617967 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 6 05:27:47.617973 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 6 05:27:47.617978 kernel: ACPI: Added _OSI(Module Device)
Nov 6 05:27:47.617984 kernel: ACPI: Added _OSI(Processor Device)
Nov 6 05:27:47.617990 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 6 05:27:47.617996 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 6 05:27:47.618002 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Nov 6 05:27:47.618008 kernel: ACPI: Interpreter enabled
Nov 6 05:27:47.618015 kernel: ACPI: PM: (supports S0 S1 S5)
Nov 6 05:27:47.618021 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 6 05:27:47.618026 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 6 05:27:47.618032 kernel: PCI: Using E820 reservations for host bridge windows
Nov 6 05:27:47.618038 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Nov 6 05:27:47.618044 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Nov 6 05:27:47.618127 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 6 05:27:47.618182 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Nov 6 05:27:47.618245 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Nov 6 05:27:47.618254 kernel: PCI host bridge to bus 0000:00
Nov 6 05:27:47.618308 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 6 05:27:47.618355 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Nov 6 05:27:47.618401 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 6 05:27:47.618446 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 6 05:27:47.618490 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Nov 6 05:27:47.618537 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Nov 6 05:27:47.618597 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint
Nov 6 05:27:47.618655 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge
Nov 6 05:27:47.618708 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Nov 6 05:27:47.618769 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Nov 6 05:27:47.618827 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint
Nov 6 05:27:47.618880 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f]
Nov 6 05:27:47.618932 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Nov 6 05:27:47.618984 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Nov 6 05:27:47.619034 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Nov 6 05:27:47.619088 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk
Nov 6 05:27:47.619144 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 6 05:27:47.619196 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Nov 6 05:27:47.619388 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Nov 6 05:27:47.619458 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint
Nov 6 05:27:47.619516 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf]
Nov 6 05:27:47.619573 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit]
Nov 6 05:27:47.619631 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint
Nov 6 05:27:47.619684 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f]
Nov 6 05:27:47.619736 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref]
Nov 6 05:27:47.619787 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff]
Nov 6 05:27:47.619839 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref]
Nov 6 05:27:47.619890 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 6 05:27:47.619948 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge
Nov 6 05:27:47.620001 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Nov 6 05:27:47.620053 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Nov 6 05:27:47.620104 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Nov 6 05:27:47.620170 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Nov 6 05:27:47.620257 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Nov 6 05:27:47.620316 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Nov 6 05:27:47.621269 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Nov 6 05:27:47.621344 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Nov 6 05:27:47.621400 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Nov 6 05:27:47.621459 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Nov 6 05:27:47.621513 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Nov 6 05:27:47.621566 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Nov 6 05:27:47.621618 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Nov 6 05:27:47.621674 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Nov 6 05:27:47.621727 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Nov 6 05:27:47.621786 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Nov 6 05:27:47.621838 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Nov 6 05:27:47.621890 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Nov 6 05:27:47.621942 kernel: pci 0000:00:15.2: bridge window [mem
0xfcd00000-0xfcdfffff] Nov 6 05:27:47.621997 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 6 05:27:47.622049 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.622106 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.622158 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 6 05:27:47.622209 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 6 05:27:47.622332 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 6 05:27:47.622386 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.622446 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.622499 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 6 05:27:47.622550 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 6 05:27:47.622602 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 6 05:27:47.622653 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.622709 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.622763 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 6 05:27:47.622835 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 6 05:27:47.622887 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 6 05:27:47.622939 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.622995 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.623047 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 6 05:27:47.623099 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 6 05:27:47.623151 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 6 05:27:47.623205 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Nov 6 
05:27:47.623280 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.623336 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 6 05:27:47.623388 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 6 05:27:47.623440 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 6 05:27:47.623492 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.623549 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.623605 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 6 05:27:47.623657 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 6 05:27:47.623709 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 6 05:27:47.623761 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.623817 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.623884 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 6 05:27:47.623945 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 6 05:27:47.623997 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 6 05:27:47.624051 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 6 05:27:47.624104 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.624160 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.624213 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 6 05:27:47.624473 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 6 05:27:47.624528 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 6 05:27:47.624593 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 6 05:27:47.624650 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.627746 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe 
Root Port Nov 6 05:27:47.627831 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 6 05:27:47.627890 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 6 05:27:47.627945 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 6 05:27:47.627998 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.628055 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.628113 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 6 05:27:47.628165 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Nov 6 05:27:47.628225 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 6 05:27:47.628291 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.628360 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.628419 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 6 05:27:47.628487 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 6 05:27:47.628553 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 6 05:27:47.628606 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.628662 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.628715 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 6 05:27:47.628771 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 6 05:27:47.630061 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 6 05:27:47.630132 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.630197 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.630270 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 6 05:27:47.630325 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 6 05:27:47.630377 kernel: pci 0000:00:16.7: 
bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 6 05:27:47.630429 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.630485 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.630537 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 6 05:27:47.630591 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 6 05:27:47.630643 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 6 05:27:47.630695 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 6 05:27:47.630745 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.630802 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.630854 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 6 05:27:47.630907 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 6 05:27:47.630957 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 6 05:27:47.631007 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 6 05:27:47.631058 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.631116 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.631171 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 6 05:27:47.631222 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 6 05:27:47.632784 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 6 05:27:47.632845 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 6 05:27:47.632899 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.632957 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.633010 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 6 05:27:47.633066 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 6 05:27:47.633117 
kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 6 05:27:47.633169 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.633225 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.633369 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 6 05:27:47.633422 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 6 05:27:47.633474 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 6 05:27:47.633530 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.633588 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.633640 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 6 05:27:47.633691 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 6 05:27:47.633746 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 6 05:27:47.633811 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.633867 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.633923 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 6 05:27:47.633975 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 6 05:27:47.634026 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 6 05:27:47.634079 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.634180 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.634262 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 6 05:27:47.634326 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 6 05:27:47.634379 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 6 05:27:47.634434 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.634491 kernel: pci 0000:00:18.0: [15ad:07a0] 
type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.634543 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 6 05:27:47.634594 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 6 05:27:47.634657 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 6 05:27:47.634709 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 6 05:27:47.634760 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.634818 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.634871 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 6 05:27:47.634929 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 6 05:27:47.634980 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 6 05:27:47.635030 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 6 05:27:47.635080 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.635139 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.635195 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 6 05:27:47.636784 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 6 05:27:47.636857 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 6 05:27:47.636913 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.636970 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.637024 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 6 05:27:47.637076 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 6 05:27:47.637131 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 6 05:27:47.637182 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.637265 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.637321 
kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 6 05:27:47.637374 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 6 05:27:47.637425 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 6 05:27:47.638471 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.638542 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.638600 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 6 05:27:47.638654 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 6 05:27:47.638708 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 6 05:27:47.638762 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.638823 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.638877 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 6 05:27:47.638932 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 6 05:27:47.638989 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 6 05:27:47.639048 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.639106 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 6 05:27:47.639158 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 6 05:27:47.639210 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 6 05:27:47.639279 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 6 05:27:47.639335 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.639394 kernel: pci_bus 0000:01: extended config space not accessible Nov 6 05:27:47.639449 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 6 05:27:47.639504 kernel: pci_bus 0000:02: extended config space not accessible Nov 6 05:27:47.639514 kernel: acpiphp: Slot [32] registered Nov 6 05:27:47.639520 kernel: acpiphp: Slot 
[33] registered Nov 6 05:27:47.639526 kernel: acpiphp: Slot [34] registered Nov 6 05:27:47.639532 kernel: acpiphp: Slot [35] registered Nov 6 05:27:47.639539 kernel: acpiphp: Slot [36] registered Nov 6 05:27:47.639545 kernel: acpiphp: Slot [37] registered Nov 6 05:27:47.639551 kernel: acpiphp: Slot [38] registered Nov 6 05:27:47.639557 kernel: acpiphp: Slot [39] registered Nov 6 05:27:47.639563 kernel: acpiphp: Slot [40] registered Nov 6 05:27:47.639569 kernel: acpiphp: Slot [41] registered Nov 6 05:27:47.639575 kernel: acpiphp: Slot [42] registered Nov 6 05:27:47.639581 kernel: acpiphp: Slot [43] registered Nov 6 05:27:47.639586 kernel: acpiphp: Slot [44] registered Nov 6 05:27:47.639592 kernel: acpiphp: Slot [45] registered Nov 6 05:27:47.639599 kernel: acpiphp: Slot [46] registered Nov 6 05:27:47.639605 kernel: acpiphp: Slot [47] registered Nov 6 05:27:47.639611 kernel: acpiphp: Slot [48] registered Nov 6 05:27:47.639616 kernel: acpiphp: Slot [49] registered Nov 6 05:27:47.639622 kernel: acpiphp: Slot [50] registered Nov 6 05:27:47.639628 kernel: acpiphp: Slot [51] registered Nov 6 05:27:47.639634 kernel: acpiphp: Slot [52] registered Nov 6 05:27:47.639640 kernel: acpiphp: Slot [53] registered Nov 6 05:27:47.639646 kernel: acpiphp: Slot [54] registered Nov 6 05:27:47.639652 kernel: acpiphp: Slot [55] registered Nov 6 05:27:47.639659 kernel: acpiphp: Slot [56] registered Nov 6 05:27:47.639664 kernel: acpiphp: Slot [57] registered Nov 6 05:27:47.639670 kernel: acpiphp: Slot [58] registered Nov 6 05:27:47.639676 kernel: acpiphp: Slot [59] registered Nov 6 05:27:47.639682 kernel: acpiphp: Slot [60] registered Nov 6 05:27:47.639687 kernel: acpiphp: Slot [61] registered Nov 6 05:27:47.639694 kernel: acpiphp: Slot [62] registered Nov 6 05:27:47.639699 kernel: acpiphp: Slot [63] registered Nov 6 05:27:47.639752 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Nov 6 05:27:47.640468 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff 
window] (subtractive decode) Nov 6 05:27:47.640539 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Nov 6 05:27:47.640599 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Nov 6 05:27:47.640650 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Nov 6 05:27:47.640700 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Nov 6 05:27:47.640761 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Nov 6 05:27:47.640816 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Nov 6 05:27:47.640873 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Nov 6 05:27:47.640935 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Nov 6 05:27:47.640989 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 6 05:27:47.641042 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Nov 6 05:27:47.641095 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 6 05:27:47.641150 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 6 05:27:47.641215 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 6 05:27:47.641318 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 6 05:27:47.641376 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 6 05:27:47.641429 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 6 05:27:47.641483 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 6 05:27:47.641556 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 6 05:27:47.641617 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Nov 6 05:27:47.641677 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Nov 6 05:27:47.646845 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Nov 6 05:27:47.646929 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Nov 6 05:27:47.646997 kernel: pci 0000:0b:00.0: 
BAR 3 [io 0x5000-0x500f] Nov 6 05:27:47.647062 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Nov 6 05:27:47.647126 kernel: pci 0000:0b:00.0: supports D1 D2 Nov 6 05:27:47.647187 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 6 05:27:47.647256 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Nov 6 05:27:47.647326 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 6 05:27:47.647403 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 6 05:27:47.647475 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 6 05:27:47.647536 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 6 05:27:47.647594 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 6 05:27:47.647658 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 6 05:27:47.647721 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 6 05:27:47.647779 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 6 05:27:47.647844 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 6 05:27:47.647906 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 6 05:27:47.647961 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 6 05:27:47.648017 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 6 05:27:47.648087 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 6 05:27:47.648168 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 6 05:27:47.650084 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 6 05:27:47.650160 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 6 05:27:47.650224 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 6 05:27:47.650293 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 6 05:27:47.650353 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 6 05:27:47.650421 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 6 05:27:47.650505 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 6 05:27:47.650583 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 6 05:27:47.650646 kernel: pci 
0000:00:18.6: PCI bridge to [bus 21] Nov 6 05:27:47.650719 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 6 05:27:47.650733 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Nov 6 05:27:47.650743 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Nov 6 05:27:47.650753 kernel: ACPI: PCI: Interrupt link LNKB disabled Nov 6 05:27:47.650763 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 6 05:27:47.650783 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Nov 6 05:27:47.650790 kernel: iommu: Default domain type: Translated Nov 6 05:27:47.650796 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 6 05:27:47.650804 kernel: PCI: Using ACPI for IRQ routing Nov 6 05:27:47.650811 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 6 05:27:47.650819 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Nov 6 05:27:47.650825 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Nov 6 05:27:47.650885 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Nov 6 05:27:47.650938 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Nov 6 05:27:47.650991 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 6 05:27:47.651003 kernel: vgaarb: loaded Nov 6 05:27:47.651011 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Nov 6 05:27:47.651019 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Nov 6 05:27:47.651025 kernel: clocksource: Switched to clocksource tsc-early Nov 6 05:27:47.651031 kernel: VFS: Disk quotas dquot_6.6.0 Nov 6 05:27:47.651037 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 6 05:27:47.651043 kernel: pnp: PnP ACPI init Nov 6 05:27:47.651116 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Nov 6 05:27:47.651180 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Nov 6 05:27:47.653257 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Nov 6 
05:27:47.653338 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Nov 6 05:27:47.653414 kernel: pnp 00:06: [dma 2] Nov 6 05:27:47.653484 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Nov 6 05:27:47.653543 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Nov 6 05:27:47.653606 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Nov 6 05:27:47.653616 kernel: pnp: PnP ACPI: found 8 devices Nov 6 05:27:47.653623 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 6 05:27:47.653633 kernel: NET: Registered PF_INET protocol family Nov 6 05:27:47.653640 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 6 05:27:47.653646 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 6 05:27:47.653652 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 6 05:27:47.653658 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 6 05:27:47.653666 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 6 05:27:47.653672 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 6 05:27:47.653678 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 6 05:27:47.653685 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 6 05:27:47.653691 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 6 05:27:47.653697 kernel: NET: Registered PF_XDP protocol family Nov 6 05:27:47.653771 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Nov 6 05:27:47.653838 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 6 05:27:47.653907 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 6 05:27:47.653985 kernel: pci 0000:00:15.5: bridge 
window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 6 05:27:47.654050 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 6 05:27:47.654119 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Nov 6 05:27:47.654188 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Nov 6 05:27:47.655282 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Nov 6 05:27:47.655357 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Nov 6 05:27:47.655436 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Nov 6 05:27:47.655502 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Nov 6 05:27:47.655570 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Nov 6 05:27:47.655634 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Nov 6 05:27:47.655693 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Nov 6 05:27:47.655756 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Nov 6 05:27:47.655815 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Nov 6 05:27:47.655882 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Nov 6 05:27:47.655948 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Nov 6 05:27:47.656012 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Nov 6 05:27:47.656079 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Nov 6 05:27:47.656159 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Nov 6 05:27:47.656351 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 
21] add_size 1000 Nov 6 05:27:47.656426 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Nov 6 05:27:47.656493 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Nov 6 05:27:47.656555 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Nov 6 05:27:47.656626 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.656692 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.656768 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.656829 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.656900 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.656967 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.657036 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.657100 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.657173 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.657247 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.657313 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.657384 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.657455 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.657512 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.657606 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.657668 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.657731 kernel: pci 0000:00:16.6: 
bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.657804 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.657864 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.657927 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.657979 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.658030 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.658083 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.658134 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.658188 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.658260 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.658315 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.658386 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.658455 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.658528 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.658590 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.658665 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.658727 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.658786 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.658854 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.658918 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.658975 kernel: pci 0000:00:18.5: 
bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.659037 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.659110 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.659177 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.659249 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.659322 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.659383 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.659454 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.659512 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.659577 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.659642 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.659714 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.659773 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.659838 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.659905 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.659974 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.660039 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.660098 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.660162 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.660248 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.660319 kernel: pci 0000:00:17.6: 
bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.660386 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.660451 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.660512 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.660584 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.660644 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.660710 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.660777 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.660840 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.660919 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.660999 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.661062 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.661131 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.661194 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.661276 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.661342 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.661407 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.661480 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.661562 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.661638 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.661703 kernel: pci 0000:00:15.6: 
bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.661773 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.661834 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.661905 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.661965 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.662033 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.662109 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Nov 6 05:27:47.662185 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Nov 6 05:27:47.662270 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 6 05:27:47.662334 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Nov 6 05:27:47.662396 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 6 05:27:47.662464 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 6 05:27:47.662526 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 6 05:27:47.662595 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Nov 6 05:27:47.662674 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 6 05:27:47.662750 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 6 05:27:47.662826 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 6 05:27:47.662892 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Nov 6 05:27:47.662965 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 6 05:27:47.663034 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 6 05:27:47.663101 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 6 05:27:47.663168 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 6 05:27:47.663245 kernel: pci 0000:00:15.2: PCI 
bridge to [bus 05] Nov 6 05:27:47.663321 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 6 05:27:47.663391 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 6 05:27:47.663456 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 6 05:27:47.663526 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 6 05:27:47.663585 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 6 05:27:47.663665 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 6 05:27:47.663731 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 6 05:27:47.663807 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 6 05:27:47.663882 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 6 05:27:47.663959 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 6 05:27:47.664025 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 6 05:27:47.664093 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 6 05:27:47.664161 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 6 05:27:47.664246 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 6 05:27:47.664310 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 6 05:27:47.664376 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 6 05:27:47.664442 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 6 05:27:47.664522 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 6 05:27:47.664591 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Nov 6 05:27:47.664656 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 6 05:27:47.664717 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 6 05:27:47.664769 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 6 05:27:47.664821 kernel: pci 0000:00:16.0: bridge window 
[mem 0xc0200000-0xc03fffff 64bit pref] Nov 6 05:27:47.664891 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 6 05:27:47.664965 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 6 05:27:47.665023 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 6 05:27:47.665093 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 6 05:27:47.665167 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 6 05:27:47.665260 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 6 05:27:47.665334 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 6 05:27:47.665403 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 6 05:27:47.665466 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 6 05:27:47.665534 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 6 05:27:47.665593 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 6 05:27:47.665649 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 6 05:27:47.665708 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Nov 6 05:27:47.665782 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 6 05:27:47.665847 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 6 05:27:47.665916 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 6 05:27:47.665982 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 6 05:27:47.666054 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 6 05:27:47.666121 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 6 05:27:47.666179 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 6 05:27:47.666257 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 6 05:27:47.666333 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 6 05:27:47.666406 kernel: pci 0000:00:16.7: bridge window 
[mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 6 05:27:47.666477 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 6 05:27:47.666549 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 6 05:27:47.666611 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 6 05:27:47.666684 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 6 05:27:47.666743 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 6 05:27:47.666801 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 6 05:27:47.666865 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 6 05:27:47.666947 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 6 05:27:47.667019 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 6 05:27:47.667078 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 6 05:27:47.667141 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 6 05:27:47.667208 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 6 05:27:47.667294 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 6 05:27:47.667355 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 6 05:27:47.667415 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 6 05:27:47.667487 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 6 05:27:47.667566 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 6 05:27:47.667634 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 6 05:27:47.667700 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 6 05:27:47.667771 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 6 05:27:47.667848 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 6 05:27:47.667909 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 6 05:27:47.667968 kernel: pci 0000:00:17.6: bridge window [mem 
0xfbb00000-0xfbbfffff] Nov 6 05:27:47.668027 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 6 05:27:47.668100 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 6 05:27:47.668163 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 6 05:27:47.668299 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 6 05:27:47.668371 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 6 05:27:47.668447 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 6 05:27:47.668509 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 6 05:27:47.668566 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 6 05:27:47.668631 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 6 05:27:47.668703 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 6 05:27:47.668789 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 6 05:27:47.668860 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 6 05:27:47.668922 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 6 05:27:47.668990 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 6 05:27:47.669057 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 6 05:27:47.669118 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 6 05:27:47.669177 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 6 05:27:47.669252 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 6 05:27:47.669333 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 6 05:27:47.669405 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 6 05:27:47.669468 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 6 05:27:47.669527 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 6 05:27:47.669598 kernel: pci 0000:00:18.5: bridge window [mem 
0xfbe00000-0xfbefffff] Nov 6 05:27:47.669671 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 6 05:27:47.669735 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 6 05:27:47.669795 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 6 05:27:47.669855 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 6 05:27:47.669928 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 6 05:27:47.669997 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 6 05:27:47.670059 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 6 05:27:47.670121 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Nov 6 05:27:47.670194 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 6 05:27:47.670261 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 6 05:27:47.670309 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Nov 6 05:27:47.670355 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Nov 6 05:27:47.670409 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Nov 6 05:27:47.670467 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Nov 6 05:27:47.670526 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 6 05:27:47.670582 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Nov 6 05:27:47.670645 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 6 05:27:47.670712 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 6 05:27:47.670773 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Nov 6 05:27:47.670832 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Nov 6 05:27:47.670892 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Nov 6 05:27:47.670961 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Nov 6 05:27:47.671035 kernel: 
pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Nov 6 05:27:47.671099 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Nov 6 05:27:47.671153 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Nov 6 05:27:47.671212 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Nov 6 05:27:47.671293 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Nov 6 05:27:47.671363 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Nov 6 05:27:47.671416 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Nov 6 05:27:47.671485 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Nov 6 05:27:47.671546 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Nov 6 05:27:47.671612 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Nov 6 05:27:47.671665 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 6 05:27:47.671717 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Nov 6 05:27:47.671769 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Nov 6 05:27:47.671840 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Nov 6 05:27:47.671910 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Nov 6 05:27:47.671984 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Nov 6 05:27:47.672040 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Nov 6 05:27:47.672105 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Nov 6 05:27:47.672158 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Nov 6 05:27:47.672216 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Nov 6 05:27:47.672294 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Nov 6 05:27:47.672352 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Nov 6 05:27:47.672415 kernel: pci_bus 0000:0c: resource 2 [mem 
0xe7700000-0xe77fffff 64bit pref] Nov 6 05:27:47.672495 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Nov 6 05:27:47.672561 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Nov 6 05:27:47.672628 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Nov 6 05:27:47.672707 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Nov 6 05:27:47.672767 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 6 05:27:47.672835 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Nov 6 05:27:47.672900 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 6 05:27:47.672978 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Nov 6 05:27:47.673051 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Nov 6 05:27:47.673126 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Nov 6 05:27:47.673201 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Nov 6 05:27:47.673284 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Nov 6 05:27:47.673334 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 6 05:27:47.673390 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Nov 6 05:27:47.673449 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Nov 6 05:27:47.673512 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 6 05:27:47.673583 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Nov 6 05:27:47.673637 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Nov 6 05:27:47.673693 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Nov 6 05:27:47.673754 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Nov 6 05:27:47.673808 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Nov 6 05:27:47.673860 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Nov 6 
05:27:47.673929 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Nov 6 05:27:47.673986 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 6 05:27:47.674043 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Nov 6 05:27:47.674103 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 6 05:27:47.674175 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Nov 6 05:27:47.674246 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Nov 6 05:27:47.674310 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Nov 6 05:27:47.674377 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Nov 6 05:27:47.674438 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Nov 6 05:27:47.674490 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 6 05:27:47.674548 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Nov 6 05:27:47.674606 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Nov 6 05:27:47.674666 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Nov 6 05:27:47.674738 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Nov 6 05:27:47.674798 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Nov 6 05:27:47.674851 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Nov 6 05:27:47.674920 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Nov 6 05:27:47.674980 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Nov 6 05:27:47.675037 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Nov 6 05:27:47.675095 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 6 05:27:47.675153 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Nov 6 05:27:47.675206 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Nov 6 05:27:47.675283 
kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Nov 6 05:27:47.675347 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Nov 6 05:27:47.675423 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Nov 6 05:27:47.675486 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Nov 6 05:27:47.675551 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Nov 6 05:27:47.675614 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 6 05:27:47.675690 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 6 05:27:47.675701 kernel: PCI: CLS 32 bytes, default 64 Nov 6 05:27:47.675709 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 6 05:27:47.675716 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Nov 6 05:27:47.675727 kernel: clocksource: Switched to clocksource tsc Nov 6 05:27:47.675740 kernel: Initialise system trusted keyrings Nov 6 05:27:47.675750 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 6 05:27:47.675756 kernel: Key type asymmetric registered Nov 6 05:27:47.675762 kernel: Asymmetric key parser 'x509' registered Nov 6 05:27:47.675770 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 6 05:27:47.675777 kernel: io scheduler mq-deadline registered Nov 6 05:27:47.675782 kernel: io scheduler kyber registered Nov 6 05:27:47.675788 kernel: io scheduler bfq registered Nov 6 05:27:47.675856 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Nov 6 05:27:47.675929 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.676011 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Nov 6 05:27:47.676103 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ 
Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.676178 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Nov 6 05:27:47.676273 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.676331 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Nov 6 05:27:47.676385 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.676442 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Nov 6 05:27:47.676496 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.676557 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Nov 6 05:27:47.676612 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.676674 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Nov 6 05:27:47.676752 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.676819 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Nov 6 05:27:47.676885 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.676940 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Nov 6 05:27:47.676994 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.677047 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Nov 6 05:27:47.677108 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.677163 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Nov 6 05:27:47.677224 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.677327 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Nov 6 05:27:47.677386 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.677468 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Nov 6 05:27:47.677533 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.677587 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Nov 6 05:27:47.677641 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.677697 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Nov 6 05:27:47.677750 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.677812 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Nov 6 05:27:47.677866 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.677943 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Nov 6 05:27:47.678005 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.678071 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Nov 6 05:27:47.678125 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- 
PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.678178 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Nov 6 05:27:47.678253 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.678311 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Nov 6 05:27:47.678364 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.678417 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Nov 6 05:27:47.678470 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.678524 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Nov 6 05:27:47.678598 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.678667 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Nov 6 05:27:47.678721 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.678781 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Nov 6 05:27:47.678834 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.678896 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Nov 6 05:27:47.678952 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.679005 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Nov 6 05:27:47.679058 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- 
AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.679115 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Nov 6 05:27:47.679169 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.679222 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Nov 6 05:27:47.679287 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.679341 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Nov 6 05:27:47.679394 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.679448 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Nov 6 05:27:47.679502 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.679558 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Nov 6 05:27:47.679612 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.679666 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Nov 6 05:27:47.679720 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 6 05:27:47.679733 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 6 05:27:47.679739 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 6 05:27:47.679747 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 6 05:27:47.679754 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Nov 6 05:27:47.679760 kernel: serio: i8042 
KBD port at 0x60,0x64 irq 1 Nov 6 05:27:47.679766 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 6 05:27:47.679823 kernel: rtc_cmos 00:01: registered as rtc0 Nov 6 05:27:47.679873 kernel: rtc_cmos 00:01: setting system clock to 2025-11-06T05:27:46 UTC (1762406866) Nov 6 05:27:47.679883 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 6 05:27:47.679928 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Nov 6 05:27:47.679939 kernel: intel_pstate: CPU model not supported Nov 6 05:27:47.679945 kernel: NET: Registered PF_INET6 protocol family Nov 6 05:27:47.679951 kernel: Segment Routing with IPv6 Nov 6 05:27:47.679958 kernel: In-situ OAM (IOAM) with IPv6 Nov 6 05:27:47.679964 kernel: NET: Registered PF_PACKET protocol family Nov 6 05:27:47.679970 kernel: Key type dns_resolver registered Nov 6 05:27:47.679976 kernel: IPI shorthand broadcast: enabled Nov 6 05:27:47.679983 kernel: sched_clock: Marking stable (1857003648, 174452866)->(2046345872, -14889358) Nov 6 05:27:47.679989 kernel: registered taskstats version 1 Nov 6 05:27:47.679996 kernel: Loading compiled-in X.509 certificates Nov 6 05:27:47.680003 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: edee08bd79f57120bcf336d97df00a0ad5e85412' Nov 6 05:27:47.680009 kernel: Demotion targets for Node 0: null Nov 6 05:27:47.680015 kernel: Key type .fscrypt registered Nov 6 05:27:47.680021 kernel: Key type fscrypt-provisioning registered Nov 6 05:27:47.680028 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 6 05:27:47.680034 kernel: ima: Allocated hash algorithm: sha1
Nov 6 05:27:47.680040 kernel: ima: No architecture policies found
Nov 6 05:27:47.680047 kernel: clk: Disabling unused clocks
Nov 6 05:27:47.680054 kernel: Freeing unused kernel image (initmem) memory: 15356K
Nov 6 05:27:47.680060 kernel: Write protecting the kernel read-only data: 45056k
Nov 6 05:27:47.680066 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Nov 6 05:27:47.680073 kernel: Run /init as init process
Nov 6 05:27:47.680079 kernel: with arguments:
Nov 6 05:27:47.680086 kernel: /init
Nov 6 05:27:47.680093 kernel: with environment:
Nov 6 05:27:47.680099 kernel: HOME=/
Nov 6 05:27:47.680105 kernel: TERM=linux
Nov 6 05:27:47.680111 kernel: SCSI subsystem initialized
Nov 6 05:27:47.680118 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Nov 6 05:27:47.680125 kernel: vmw_pvscsi: using 64bit dma
Nov 6 05:27:47.680131 kernel: vmw_pvscsi: max_id: 16
Nov 6 05:27:47.680138 kernel: vmw_pvscsi: setting ring_pages to 8
Nov 6 05:27:47.680144 kernel: vmw_pvscsi: enabling reqCallThreshold
Nov 6 05:27:47.680150 kernel: vmw_pvscsi: driver-based request coalescing enabled
Nov 6 05:27:47.680156 kernel: vmw_pvscsi: using MSI-X
Nov 6 05:27:47.680218 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Nov 6 05:27:47.680299 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Nov 6 05:27:47.680368 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Nov 6 05:27:47.680427 kernel: sd 0:0:0:0: [sda] 25804800 512-byte logical blocks: (13.2 GB/12.3 GiB)
Nov 6 05:27:47.680485 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 6 05:27:47.680543 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Nov 6 05:27:47.680600 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Nov 6 05:27:47.680658 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Nov 6 05:27:47.680669 kernel: libata version 3.00 loaded.
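The sd driver line above reports 25804800 512-byte logical blocks as both 13.2 GB (decimal) and 12.3 GiB (binary). A small sketch of how both figures derive from the same block count (illustrative arithmetic, not from the log):

```python
# Block count and logical block size from the [sda] line above.
blocks = 25_804_800
block_size = 512

size_bytes = blocks * block_size   # 13212057600 bytes
size_gb = size_bytes / 10**9       # decimal gigabytes (SI)
size_gib = size_bytes / 2**30      # binary gibibytes

# Matches the kernel's "(13.2 GB/12.3 GiB)"
print(f"{size_gb:.1f} GB / {size_gib:.1f} GiB")
```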
Nov 6 05:27:47.680676 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 6 05:27:47.680732 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 6 05:27:47.680789 kernel: ata_piix 0000:00:07.1: version 2.13
Nov 6 05:27:47.680847 kernel: scsi host1: ata_piix
Nov 6 05:27:47.680909 kernel: scsi host2: ata_piix
Nov 6 05:27:47.680920 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0
Nov 6 05:27:47.680926 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0
Nov 6 05:27:47.680935 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Nov 6 05:27:47.680997 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Nov 6 05:27:47.681007 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 6 05:27:47.681014 kernel: device-mapper: uevent: version 1.0.3
Nov 6 05:27:47.681020 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 6 05:27:47.681075 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Nov 6 05:27:47.681085 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 6 05:27:47.681091 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 6 05:27:47.681152 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 6 05:27:47.681161 kernel: raid6: avx2x4 gen() 47077 MB/s
Nov 6 05:27:47.681167 kernel: raid6: avx2x2 gen() 52970 MB/s
Nov 6 05:27:47.681174 kernel: raid6: avx2x1 gen() 43491 MB/s
Nov 6 05:27:47.681180 kernel: raid6: using algorithm avx2x2 gen() 52970 MB/s
Nov 6 05:27:47.681186 kernel: raid6: .... xor() 30659 MB/s, rmw enabled
Nov 6 05:27:47.681193 kernel: raid6: using avx2x2 recovery algorithm
Nov 6 05:27:47.681199 kernel: xor: automatically using best checksumming function avx
Nov 6 05:27:47.681205 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 6 05:27:47.681213 kernel: BTRFS: device fsid b5cf1d69-dae6-4f65-bb6f-44a747495a60 devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (197)
Nov 6 05:27:47.681220 kernel: BTRFS info (device dm-0): first mount of filesystem b5cf1d69-dae6-4f65-bb6f-44a747495a60
Nov 6 05:27:47.681248 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 6 05:27:47.681256 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 6 05:27:47.681262 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 6 05:27:47.681269 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 6 05:27:47.681275 kernel: loop: module loaded
Nov 6 05:27:47.681281 kernel: loop0: detected capacity change from 0 to 101000
Nov 6 05:27:47.681288 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 6 05:27:47.681297 systemd[1]: Successfully made /usr/ read-only.
Nov 6 05:27:47.681306 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 6 05:27:47.681313 systemd[1]: Detected virtualization vmware.
Nov 6 05:27:47.681319 systemd[1]: Detected architecture x86-64.
Nov 6 05:27:47.681325 systemd[1]: Running in initrd.
Nov 6 05:27:47.681331 systemd[1]: No hostname configured, using default hostname.
Nov 6 05:27:47.681338 systemd[1]: Hostname set to .
Nov 6 05:27:47.681346 systemd[1]: Initializing machine ID from random generator.
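The raid6 lines above show the kernel benchmarking its gen() implementations and settling on avx2x2 at 52970 MB/s. The selection is simply the fastest measured implementation; a toy sketch using the numbers from the log (illustrative only, not the kernel's code):

```python
# gen() throughputs (MB/s) as benchmarked in the raid6 log lines above.
gen_results = {
    "avx2x4": 47077,
    "avx2x2": 52970,
    "avx2x1": 43491,
}

# The kernel keeps whichever implementation benchmarked fastest.
best = max(gen_results, key=gen_results.get)
print(f"raid6: using algorithm {best} gen() {gen_results[best]} MB/s")
```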
Nov 6 05:27:47.681352 systemd[1]: Queued start job for default target initrd.target.
Nov 6 05:27:47.681359 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 6 05:27:47.681365 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 05:27:47.681372 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 05:27:47.681379 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 6 05:27:47.681386 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 6 05:27:47.681394 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 6 05:27:47.681400 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 6 05:27:47.681407 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 05:27:47.681413 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 6 05:27:47.681420 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 6 05:27:47.681426 systemd[1]: Reached target paths.target - Path Units.
Nov 6 05:27:47.681433 systemd[1]: Reached target slices.target - Slice Units.
Nov 6 05:27:47.681439 systemd[1]: Reached target swap.target - Swaps.
Nov 6 05:27:47.681446 systemd[1]: Reached target timers.target - Timer Units.
Nov 6 05:27:47.681453 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 05:27:47.681459 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 05:27:47.681466 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 6 05:27:47.681472 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 6 05:27:47.681478 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 05:27:47.681485 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 6 05:27:47.681491 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 05:27:47.681498 systemd[1]: Reached target sockets.target - Socket Units.
Nov 6 05:27:47.681505 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Nov 6 05:27:47.681512 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 6 05:27:47.681518 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 6 05:27:47.681525 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 6 05:27:47.681532 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 6 05:27:47.681538 systemd[1]: Starting systemd-fsck-usr.service...
Nov 6 05:27:47.681545 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 6 05:27:47.681551 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 6 05:27:47.681559 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 05:27:47.681566 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 6 05:27:47.681573 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 05:27:47.681579 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 6 05:27:47.681587 systemd[1]: Finished systemd-fsck-usr.service.
Nov 6 05:27:47.681611 systemd-journald[334]: Collecting audit messages is disabled.
Nov 6 05:27:47.681628 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 6 05:27:47.681636 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 6 05:27:47.681644 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 05:27:47.681651 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 6 05:27:47.681657 kernel: Bridge firewalling registered
Nov 6 05:27:47.681664 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 6 05:27:47.681670 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 6 05:27:47.681678 systemd-journald[334]: Journal started
Nov 6 05:27:47.681693 systemd-journald[334]: Runtime Journal (/run/log/journal/998d1d34a2594ab2bc9679e9e80796aa) is 4.8M, max 38.4M, 33.6M free.
Nov 6 05:27:47.654251 systemd-modules-load[335]: Inserted module 'br_netfilter'
Nov 6 05:27:47.683246 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 05:27:47.685253 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 6 05:27:47.687331 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 6 05:27:47.689314 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 6 05:27:47.700569 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 6 05:27:47.701691 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Nov 6 05:27:47.703832 systemd-tmpfiles[358]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 6 05:27:47.707245 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 05:27:47.708581 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 6 05:27:47.715340 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 05:27:47.717299 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 6 05:27:47.731050 dracut-cmdline[378]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 ip=139.178.70.103::139.178.70.97:28::ens192:off:1.1.1.1:1.0.0.1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=42c7eeb79a8ee89597bba4204806137326be9acdbca65a8fd923766f65b62f69
Nov 6 05:27:47.743204 systemd-resolved[366]: Positive Trust Anchors:
Nov 6 05:27:47.743213 systemd-resolved[366]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 6 05:27:47.743264 systemd-resolved[366]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 6 05:27:47.755497 systemd-resolved[366]: Defaulting to hostname 'linux'.
Nov 6 05:27:47.756197 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 6 05:27:47.756466 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 6 05:27:47.808249 kernel: Loading iSCSI transport class v2.0-870.
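The dracut-cmdline entry above carries an `ip=` argument in the classic colon-separated IPv4 layout (`client:server:gateway:netmask:hostname:iface:autoconf[:dns1[:dns2]]`). A rough parser for that form (a hypothetical helper; the field labels are my own, and a real parser must also handle bracketed IPv6 addresses and shorthands like `ip=dhcp`):

```python
def parse_ip_karg(value):
    """Split a classic IPv4 ip= kernel argument into labeled fields.

    Hypothetical sketch: assumes the plain colon-separated IPv4 form only.
    Empty fields (here: server and hostname) are dropped from the result.
    """
    names = ["client", "server", "gateway", "netmask",
             "hostname", "iface", "autoconf", "dns1", "dns2"]
    fields = value.split(":")
    return {name: field for name, field in zip(names, fields) if field}

# ip= value taken verbatim from the dracut-cmdline log entry above.
cfg = parse_ip_karg("139.178.70.103::139.178.70.97:28::ens192:off:1.1.1.1:1.0.0.1")
print(cfg)
```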
Nov 6 05:27:47.821248 kernel: iscsi: registered transport (tcp)
Nov 6 05:27:47.849333 kernel: iscsi: registered transport (qla4xxx)
Nov 6 05:27:47.849386 kernel: QLogic iSCSI HBA Driver
Nov 6 05:27:47.871542 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 6 05:27:47.883607 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 05:27:47.884611 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 6 05:27:47.908292 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 6 05:27:47.909386 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 6 05:27:47.910302 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 6 05:27:47.930603 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 6 05:27:47.931802 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 05:27:47.953589 systemd-udevd[614]: Using default interface naming scheme 'v255'.
Nov 6 05:27:47.959976 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 05:27:47.964661 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 6 05:27:47.973207 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 6 05:27:47.974118 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 6 05:27:47.979707 dracut-pre-trigger[699]: rd.md=0: removing MD RAID activation
Nov 6 05:27:47.997071 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 6 05:27:47.998349 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 6 05:27:48.010372 systemd-networkd[720]: lo: Link UP
Nov 6 05:27:48.010379 systemd-networkd[720]: lo: Gained carrier
Nov 6 05:27:48.010747 systemd-networkd[720]: Enumeration completed
Nov 6 05:27:48.010888 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 6 05:27:48.011041 systemd[1]: Reached target network.target - Network.
Nov 6 05:27:48.085676 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 05:27:48.086688 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 6 05:27:48.146965 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Nov 6 05:27:48.170956 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Nov 6 05:27:48.180789 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Nov 6 05:27:48.188487 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Nov 6 05:27:48.190417 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 6 05:27:48.262098 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 6 05:27:48.262187 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 05:27:48.262952 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 05:27:48.267281 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 05:27:48.275246 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Nov 6 05:27:48.288378 kernel: cryptd: max_cpu_qlen set to 1000
Nov 6 05:27:48.295462 (udev-worker)[759]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Nov 6 05:27:48.305249 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI
Nov 6 05:27:48.305284 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Nov 6 05:27:48.307241 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Nov 6 05:27:48.311247 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Nov 6 05:27:48.312817 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 05:27:48.313537 systemd-networkd[720]: eth0: Interface name change detected, renamed to ens192.
Nov 6 05:27:48.322257 kernel: AES CTR mode by8 optimization enabled
Nov 6 05:27:48.322048 systemd-networkd[720]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Nov 6 05:27:48.325438 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Nov 6 05:27:48.325538 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Nov 6 05:27:48.328009 systemd-networkd[720]: ens192: Link UP
Nov 6 05:27:48.328152 systemd-networkd[720]: ens192: Gained carrier
Nov 6 05:27:48.377325 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 6 05:27:48.377876 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 6 05:27:48.378096 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 05:27:48.378323 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 6 05:27:48.379109 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 6 05:27:48.394056 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 6 05:27:49.291965 disk-uuid[789]: Warning: The kernel is still using the old partition table.
Nov 6 05:27:49.291965 disk-uuid[789]: The new table will be used at the next reboot or after you
Nov 6 05:27:49.291965 disk-uuid[789]: run partprobe(8) or kpartx(8)
Nov 6 05:27:49.291965 disk-uuid[789]: The operation has completed successfully.
Nov 6 05:27:49.298680 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 6 05:27:49.298754 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 6 05:27:49.299641 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 6 05:27:49.340248 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (879)
Nov 6 05:27:49.342537 kernel: BTRFS info (device sda6): first mount of filesystem 8a1691a9-0f9b-492f-9a94-8ffa2a579e5c
Nov 6 05:27:49.342560 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 6 05:27:49.346279 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 6 05:27:49.346329 kernel: BTRFS info (device sda6): enabling free space tree
Nov 6 05:27:49.351247 kernel: BTRFS info (device sda6): last unmount of filesystem 8a1691a9-0f9b-492f-9a94-8ffa2a579e5c
Nov 6 05:27:49.351947 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 6 05:27:49.352749 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 6 05:27:49.553339 ignition[898]: Ignition 2.22.0
Nov 6 05:27:49.553351 ignition[898]: Stage: fetch-offline
Nov 6 05:27:49.553381 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Nov 6 05:27:49.553390 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Nov 6 05:27:49.553472 ignition[898]: parsed url from cmdline: ""
Nov 6 05:27:49.553475 ignition[898]: no config URL provided
Nov 6 05:27:49.553479 ignition[898]: reading system config file "/usr/lib/ignition/user.ign"
Nov 6 05:27:49.553488 ignition[898]: no config at "/usr/lib/ignition/user.ign"
Nov 6 05:27:49.553940 ignition[898]: config successfully fetched
Nov 6 05:27:49.553971 ignition[898]: parsing config with SHA512: 266a01d566e91663c1d385269e17cba844714ae3aeaffac19ae6b6069c0dfff164aa1e0b262a544a39a4c7188b527befc8a561a5440d8461cdf1bc20a26dea63
Nov 6 05:27:49.557979 unknown[898]: fetched base config from "system"
Nov 6 05:27:49.557985 unknown[898]: fetched user config from "vmware"
Nov 6 05:27:49.558192 ignition[898]: fetch-offline: fetch-offline passed
Nov 6 05:27:49.558226 ignition[898]: Ignition finished successfully
Nov 6 05:27:49.559441 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 6 05:27:49.559801 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 6 05:27:49.560575 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 6 05:27:49.579843 ignition[917]: Ignition 2.22.0
Nov 6 05:27:49.579853 ignition[917]: Stage: kargs
Nov 6 05:27:49.580115 ignition[917]: no configs at "/usr/lib/ignition/base.d"
Nov 6 05:27:49.580121 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Nov 6 05:27:49.580573 ignition[917]: kargs: kargs passed
Nov 6 05:27:49.580598 ignition[917]: Ignition finished successfully
Nov 6 05:27:49.582332 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
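Ignition logs the SHA512 of the fetched config before parsing it (the long hex digest in the fetch-offline entries above). That digest is the standard SHA-512 over the raw config bytes; a minimal sketch of producing such a digest for an arbitrary config (illustrative only; the sample bytes below are a placeholder, not this machine's real config, so the digest will differ from the one in the log):

```python
import hashlib

# Placeholder config bytes standing in for the fetched Ignition config.
config = b'{"ignition": {"version": "3.4.0"}}'

# SHA-512 over the raw bytes, printed in the same style as the log entry.
digest = hashlib.sha512(config).hexdigest()
print(f"parsing config with SHA512: {digest}")
```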
Nov 6 05:27:49.583298 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 6 05:27:49.605481 ignition[923]: Ignition 2.22.0 Nov 6 05:27:49.605489 ignition[923]: Stage: disks Nov 6 05:27:49.605587 ignition[923]: no configs at "/usr/lib/ignition/base.d" Nov 6 05:27:49.605592 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 6 05:27:49.606130 ignition[923]: disks: disks passed Nov 6 05:27:49.606161 ignition[923]: Ignition finished successfully Nov 6 05:27:49.607966 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 6 05:27:49.608340 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 6 05:27:49.608600 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 6 05:27:49.608855 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 05:27:49.609091 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 05:27:49.609337 systemd[1]: Reached target basic.target - Basic System. Nov 6 05:27:49.610093 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 6 05:27:49.655375 systemd-fsck[932]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Nov 6 05:27:49.656586 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 6 05:27:49.658275 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 6 05:27:49.753251 kernel: EXT4-fs (sda9): mounted filesystem 05065f18-b1e1-4b9e-83f5-1a1189e0d083 r/w with ordered data mode. Quota mode: none. Nov 6 05:27:49.752699 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 6 05:27:49.753058 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 6 05:27:49.763382 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 05:27:49.773270 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Nov 6 05:27:49.773700 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 6 05:27:49.773729 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 6 05:27:49.773745 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 05:27:49.777502 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 6 05:27:49.778329 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 6 05:27:49.791246 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (940) Nov 6 05:27:49.794799 kernel: BTRFS info (device sda6): first mount of filesystem 8a1691a9-0f9b-492f-9a94-8ffa2a579e5c Nov 6 05:27:49.794825 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 05:27:49.799723 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 6 05:27:49.799758 kernel: BTRFS info (device sda6): enabling free space tree Nov 6 05:27:49.801183 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 05:27:49.863883 initrd-setup-root[964]: cut: /sysroot/etc/passwd: No such file or directory Nov 6 05:27:49.866512 initrd-setup-root[971]: cut: /sysroot/etc/group: No such file or directory Nov 6 05:27:49.869505 initrd-setup-root[978]: cut: /sysroot/etc/shadow: No such file or directory Nov 6 05:27:49.871806 initrd-setup-root[985]: cut: /sysroot/etc/gshadow: No such file or directory Nov 6 05:27:49.970118 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 6 05:27:49.971153 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 6 05:27:49.973299 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 6 05:27:49.983403 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Nov 6 05:27:49.984928 kernel: BTRFS info (device sda6): last unmount of filesystem 8a1691a9-0f9b-492f-9a94-8ffa2a579e5c Nov 6 05:27:50.007252 ignition[1052]: INFO : Ignition 2.22.0 Nov 6 05:27:50.007252 ignition[1052]: INFO : Stage: mount Nov 6 05:27:50.007252 ignition[1052]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 05:27:50.007252 ignition[1052]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 6 05:27:50.008592 ignition[1052]: INFO : mount: mount passed Nov 6 05:27:50.008592 ignition[1052]: INFO : Ignition finished successfully Nov 6 05:27:50.008425 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 6 05:27:50.011411 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 6 05:27:50.017671 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 6 05:27:50.023315 systemd-networkd[720]: ens192: Gained IPv6LL Nov 6 05:27:50.754218 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 05:27:50.771253 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1064) Nov 6 05:27:50.774382 kernel: BTRFS info (device sda6): first mount of filesystem 8a1691a9-0f9b-492f-9a94-8ffa2a579e5c Nov 6 05:27:50.774401 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 05:27:50.777777 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 6 05:27:50.777797 kernel: BTRFS info (device sda6): enabling free space tree Nov 6 05:27:50.778813 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 6 05:27:50.801926 ignition[1080]: INFO : Ignition 2.22.0 Nov 6 05:27:50.801926 ignition[1080]: INFO : Stage: files Nov 6 05:27:50.802335 ignition[1080]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 05:27:50.802335 ignition[1080]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 6 05:27:50.802582 ignition[1080]: DEBUG : files: compiled without relabeling support, skipping Nov 6 05:27:50.805955 ignition[1080]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 6 05:27:50.805955 ignition[1080]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 6 05:27:50.828328 ignition[1080]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 6 05:27:50.828603 ignition[1080]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 6 05:27:50.828874 ignition[1080]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 6 05:27:50.828632 unknown[1080]: wrote ssh authorized keys file for user: core Nov 6 05:27:50.851641 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 05:27:50.852044 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 6 05:27:50.911310 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 6 05:27:50.983442 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 05:27:50.983442 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 6 05:27:50.983824 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 6 
05:27:50.983824 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 6 05:27:50.983824 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 6 05:27:50.983824 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 6 05:27:50.983824 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 6 05:27:50.983824 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 6 05:27:50.983824 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 6 05:27:50.985798 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 6 05:27:50.985959 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 6 05:27:50.985959 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 6 05:27:50.988119 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 6 05:27:50.988119 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 6 05:27:50.988508 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Nov 6 05:27:51.400485 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 6 05:27:51.660184 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 6 05:27:51.660184 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Nov 6 05:27:51.661464 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Nov 6 05:27:51.661464 ignition[1080]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Nov 6 05:27:51.661874 ignition[1080]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 6 05:27:51.662340 ignition[1080]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 6 05:27:51.662340 ignition[1080]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Nov 6 05:27:51.662340 ignition[1080]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Nov 6 05:27:51.662795 ignition[1080]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 6 05:27:51.662795 ignition[1080]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 6 05:27:51.662795 ignition[1080]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Nov 6 05:27:51.662795 ignition[1080]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Nov 6 05:27:51.685735 ignition[1080]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 6 05:27:51.687930 ignition[1080]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 6 05:27:51.688099 ignition[1080]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 6 05:27:51.688099 ignition[1080]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Nov 6 05:27:51.688099 ignition[1080]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Nov 6 05:27:51.689247 ignition[1080]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 6 05:27:51.689247 ignition[1080]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 6 05:27:51.689247 ignition[1080]: INFO : files: files passed
Nov 6 05:27:51.689247 ignition[1080]: INFO : Ignition finished successfully
Nov 6 05:27:51.689370 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 6 05:27:51.690421 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 6 05:27:51.690906 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 6 05:27:51.706965 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 6 05:27:51.707201 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 6 05:27:51.710520 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 05:27:51.710520 initrd-setup-root-after-ignition[1115]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 05:27:51.711455 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 05:27:51.712474 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 6 05:27:51.712797 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 6 05:27:51.713485 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 6 05:27:51.740473 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 6 05:27:51.740551 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 6 05:27:51.740834 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 6 05:27:51.740986 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 6 05:27:51.741288 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 6 05:27:51.741780 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 6 05:27:51.757204 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 6 05:27:51.758156 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 6 05:27:51.772212 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 6 05:27:51.772313 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 6 05:27:51.772542 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 05:27:51.772746 systemd[1]: Stopped target timers.target - Timer Units.
Nov 6 05:27:51.772944 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 6 05:27:51.773015 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 6 05:27:51.773409 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 6 05:27:51.773560 systemd[1]: Stopped target basic.target - Basic System.
Nov 6 05:27:51.773739 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 6 05:27:51.773932 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 6 05:27:51.774139 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 6 05:27:51.774377 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 6 05:27:51.774582 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 6 05:27:51.774794 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 6 05:27:51.775013 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 6 05:27:51.775234 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 6 05:27:51.775428 systemd[1]: Stopped target swap.target - Swaps.
Nov 6 05:27:51.775617 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 6 05:27:51.775683 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 6 05:27:51.776030 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 6 05:27:51.776187 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 05:27:51.776390 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 6 05:27:51.776435 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 05:27:51.776614 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 6 05:27:51.776676 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 6 05:27:51.776980 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 6 05:27:51.777044 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 6 05:27:51.777327 systemd[1]: Stopped target paths.target - Path Units.
Nov 6 05:27:51.777481 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 6 05:27:51.781259 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 05:27:51.781444 systemd[1]: Stopped target slices.target - Slice Units.
Nov 6 05:27:51.781667 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 6 05:27:51.781864 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 6 05:27:51.781921 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 05:27:51.782074 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 6 05:27:51.782118 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 05:27:51.782312 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 6 05:27:51.782381 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 6 05:27:51.782636 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 6 05:27:51.782695 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 6 05:27:51.783446 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 6 05:27:51.783548 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 6 05:27:51.783614 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 05:27:51.785324 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 6 05:27:51.785491 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 6 05:27:51.785565 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 05:27:51.785736 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 6 05:27:51.785809 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 6 05:27:51.790425 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 6 05:27:51.792334 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 6 05:27:51.799798 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 6 05:27:51.804264 ignition[1139]: INFO : Ignition 2.22.0
Nov 6 05:27:51.804581 ignition[1139]: INFO : Stage: umount
Nov 6 05:27:51.805255 ignition[1139]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 05:27:51.805255 ignition[1139]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Nov 6 05:27:51.805569 ignition[1139]: INFO : umount: umount passed
Nov 6 05:27:51.805709 ignition[1139]: INFO : Ignition finished successfully
Nov 6 05:27:51.806860 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 6 05:27:51.807088 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 6 05:27:51.807451 systemd[1]: Stopped target network.target - Network.
Nov 6 05:27:51.807677 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 6 05:27:51.807831 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 6 05:27:51.808085 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 6 05:27:51.808275 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 6 05:27:51.808525 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 6 05:27:51.808659 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 6 05:27:51.808920 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 6 05:27:51.809052 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 6 05:27:51.809385 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 6 05:27:51.809812 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 6 05:27:51.817223 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 6 05:27:51.817305 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 6 05:27:51.818751 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Nov 6 05:27:51.818878 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 6 05:27:51.818943 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 6 05:27:51.819670 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Nov 6 05:27:51.820108 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 6 05:27:51.820394 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 6 05:27:51.820414 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 05:27:51.821058 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 6 05:27:51.821151 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 6 05:27:51.821177 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 6 05:27:51.821392 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Nov 6 05:27:51.821415 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Nov 6 05:27:51.821571 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 6 05:27:51.821594 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 6 05:27:51.823131 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 6 05:27:51.823161 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 6 05:27:51.823388 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 6 05:27:51.823412 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 05:27:51.823813 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 05:27:51.825144 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 6 05:27:51.825176 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 6 05:27:51.833523 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 6 05:27:51.833758 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 05:27:51.834262 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 6 05:27:51.834421 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 6 05:27:51.834663 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 6 05:27:51.834784 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 05:27:51.835038 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 6 05:27:51.835161 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 6 05:27:51.835474 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 6 05:27:51.835591 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 6 05:27:51.835867 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 6 05:27:51.835894 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 05:27:51.836731 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 6 05:27:51.836964 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 6 05:27:51.837093 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 05:27:51.837475 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 6 05:27:51.837629 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 05:27:51.837925 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 6 05:27:51.838053 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 05:27:51.839066 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Nov 6 05:27:51.839099 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 6 05:27:51.839126 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 6 05:27:51.851395 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 6 05:27:51.851465 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 6 05:27:51.886242 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 6 05:27:51.886465 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 6 05:27:52.069722 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 6 05:27:52.069788 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 6 05:27:52.070177 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 6 05:27:52.070282 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 6 05:27:52.070316 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 6 05:27:52.071413 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 6 05:27:52.091673 systemd[1]: Switching root.
Nov 6 05:27:52.133852 systemd-journald[334]: Journal stopped
Nov 6 05:27:53.029649 systemd-journald[334]: Received SIGTERM from PID 1 (systemd).
Nov 6 05:27:53.029677 kernel: SELinux: policy capability network_peer_controls=1
Nov 6 05:27:53.029686 kernel: SELinux: policy capability open_perms=1
Nov 6 05:27:53.029692 kernel: SELinux: policy capability extended_socket_class=1
Nov 6 05:27:53.029697 kernel: SELinux: policy capability always_check_network=0
Nov 6 05:27:53.029702 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 6 05:27:53.029708 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 6 05:27:53.029716 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 6 05:27:53.029722 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 6 05:27:53.029727 kernel: SELinux: policy capability userspace_initial_context=0
Nov 6 05:27:53.029733 kernel: audit: type=1403 audit(1762406872.485:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 6 05:27:53.029740 systemd[1]: Successfully loaded SELinux policy in 57.409ms.
Nov 6 05:27:53.029747 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.127ms.
Nov 6 05:27:53.029754 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 6 05:27:53.029762 systemd[1]: Detected virtualization vmware.
Nov 6 05:27:53.029769 systemd[1]: Detected architecture x86-64.
Nov 6 05:27:53.029775 systemd[1]: Detected first boot.
Nov 6 05:27:53.029782 systemd[1]: Initializing machine ID from random generator.
Nov 6 05:27:53.029788 zram_generator::config[1184]: No configuration found.
Nov 6 05:27:53.029874 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Nov 6 05:27:53.029886 kernel: Guest personality initialized and is active
Nov 6 05:27:53.029892 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 6 05:27:53.029902 kernel: Initialized host personality
Nov 6 05:27:53.029908 kernel: NET: Registered PF_VSOCK protocol family
Nov 6 05:27:53.029915 systemd[1]: Populated /etc with preset unit settings.
Nov 6 05:27:53.029925 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Nov 6 05:27:53.029932 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Nov 6 05:27:53.029939 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Nov 6 05:27:53.029946 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 6 05:27:53.029952 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 6 05:27:53.029959 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 6 05:27:53.029966 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 6 05:27:53.029974 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 6 05:27:53.029981 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 6 05:27:53.029988 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 6 05:27:53.029995 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 6 05:27:53.030002 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 6 05:27:53.030009 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 6 05:27:53.030016 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 6 05:27:53.030023 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 05:27:53.030031 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 05:27:53.030039 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 6 05:27:53.030046 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 6 05:27:53.030054 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 6 05:27:53.030061 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 6 05:27:53.030067 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 6 05:27:53.030074 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 05:27:53.030082 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 6 05:27:53.030089 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 6 05:27:53.030096 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 6 05:27:53.030102 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 6 05:27:53.030109 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 6 05:27:53.030116 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 05:27:53.030123 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 6 05:27:53.030130 systemd[1]: Reached target slices.target - Slice Units.
Nov 6 05:27:53.030137 systemd[1]: Reached target swap.target - Swaps.
Nov 6 05:27:53.030144 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 6 05:27:53.030152 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 6 05:27:53.030159 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 6 05:27:53.030166 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 05:27:53.030174 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 6 05:27:53.030181 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 05:27:53.030188 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 6 05:27:53.030195 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 6 05:27:53.030202 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 6 05:27:53.030209 systemd[1]: Mounting media.mount - External Media Directory...
Nov 6 05:27:53.030216 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 05:27:53.030223 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 6 05:27:53.030240 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 6 05:27:53.030251 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 6 05:27:53.030259 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 6 05:27:53.030266 systemd[1]: Reached target machines.target - Containers.
Nov 6 05:27:53.030273 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 6 05:27:53.030280 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Nov 6 05:27:53.030287 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 6 05:27:53.030294 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 6 05:27:53.030301 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 05:27:53.030309 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 6 05:27:53.030316 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 05:27:53.030323 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 6 05:27:53.030330 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 05:27:53.030337 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 6 05:27:53.030344 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 6 05:27:53.030351 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 6 05:27:53.030358 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 6 05:27:53.030366 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 6 05:27:53.030373 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 05:27:53.030381 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 6 05:27:53.030387 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 6 05:27:53.030394 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 6 05:27:53.030401 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 6 05:27:53.030408 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 6 05:27:53.030415 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 6 05:27:53.030423 kernel: fuse: init (API version 7.41)
Nov 6 05:27:53.030430 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 05:27:53.030437 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 6 05:27:53.030445 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 6 05:27:53.030452 systemd[1]: Mounted media.mount - External Media Directory.
Nov 6 05:27:53.030459 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 6 05:27:53.030466 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 6 05:27:53.030473 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 6 05:27:53.030479 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 6 05:27:53.030488 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 05:27:53.030495 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 6 05:27:53.030502 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 6 05:27:53.030509 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 05:27:53.030516 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 05:27:53.030523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 6 05:27:53.030530 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 6 05:27:53.030537 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 6 05:27:53.030544 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 6 05:27:53.030553 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 6 05:27:53.030560 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 6 05:27:53.030567 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 6 05:27:53.030574 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 05:27:53.030581 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 6 05:27:53.030588 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 6 05:27:53.030595 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 6 05:27:53.030620 systemd-journald[1284]: Collecting audit messages is disabled.
Nov 6 05:27:53.030638 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 6 05:27:53.030646 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 6 05:27:53.030653 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 6 05:27:53.030661 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 6 05:27:53.030669 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 6 05:27:53.030677 systemd-journald[1284]: Journal started
Nov 6 05:27:53.030692 systemd-journald[1284]: Runtime Journal (/run/log/journal/4b035ba95e364a4fa35317c0a182d8bb) is 4.8M, max 38.4M, 33.6M free.
Nov 6 05:27:52.840792 systemd[1]: Queued start job for default target multi-user.target.
Nov 6 05:27:52.847068 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 6 05:27:52.847314 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 6 05:27:53.031168 jq[1254]: true
Nov 6 05:27:53.034273 kernel: ACPI: bus type drm_connector registered
Nov 6 05:27:53.034292 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 6 05:27:53.034797 jq[1300]: true
Nov 6 05:27:53.037318 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 05:27:53.040266 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 6 05:27:53.040293 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 6 05:27:53.044882 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 6 05:27:53.044919 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 6 05:27:53.047438 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 6 05:27:53.052474 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 6 05:27:53.055599 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 6 05:27:53.055991 ignition[1301]: Ignition 2.22.0
Nov 6 05:27:53.057735 ignition[1301]: deleting config from guestinfo properties
Nov 6 05:27:53.060659 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 6 05:27:53.058592 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 6 05:27:53.058730 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 6 05:27:53.058904 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 6 05:27:53.059046 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 6 05:27:53.068503 ignition[1301]: Successfully deleted config
Nov 6 05:27:53.073991 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 6 05:27:53.074449 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Nov 6 05:27:53.078842 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 6 05:27:53.079087 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 6 05:27:53.080934 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 6 05:27:53.091203 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 6 05:27:53.101425 kernel: loop1: detected capacity change from 0 to 111544
Nov 6 05:27:53.106281 systemd-journald[1284]: Time spent on flushing to /var/log/journal/4b035ba95e364a4fa35317c0a182d8bb is 56.861ms for 1762 entries.
Nov 6 05:27:53.106281 systemd-journald[1284]: System Journal (/var/log/journal/4b035ba95e364a4fa35317c0a182d8bb) is 8M, max 588.1M, 580.1M free.
Nov 6 05:27:53.174765 systemd-journald[1284]: Received client request to flush runtime journal.
Nov 6 05:27:53.174791 kernel: loop2: detected capacity change from 0 to 2960
Nov 6 05:27:53.118489 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 6 05:27:53.150506 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 6 05:27:53.153312 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 6 05:27:53.176088 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 6 05:27:53.201490 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 05:27:53.258384 kernel: loop3: detected capacity change from 0 to 119080
Nov 6 05:27:53.266400 systemd-tmpfiles[1349]: ACLs are not supported, ignoring.
Nov 6 05:27:53.266596 systemd-tmpfiles[1349]: ACLs are not supported, ignoring.
Nov 6 05:27:53.269879 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 05:27:53.288284 kernel: loop4: detected capacity change from 0 to 219144
Nov 6 05:27:53.330154 kernel: loop5: detected capacity change from 0 to 111544
Nov 6 05:27:53.347620 kernel: loop6: detected capacity change from 0 to 2960
Nov 6 05:27:53.358245 kernel: loop7: detected capacity change from 0 to 119080
Nov 6 05:27:53.373328 kernel: loop1: detected capacity change from 0 to 219144
Nov 6 05:27:53.406370 (sd-merge)[1357]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'.
Nov 6 05:27:53.406648 (sd-merge)[1357]: Merged extensions into '/usr'.
Nov 6 05:27:53.419135 systemd[1]: Reload requested from client PID 1313 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 6 05:27:53.419148 systemd[1]: Reloading...
Nov 6 05:27:53.481247 zram_generator::config[1379]: No configuration found.
Nov 6 05:27:53.626923 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Nov 6 05:27:53.677158 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 6 05:27:53.677392 systemd[1]: Reloading finished in 257 ms.
Nov 6 05:27:53.693749 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 6 05:27:53.694079 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 6 05:27:53.706318 systemd[1]: Starting ensure-sysext.service...
Nov 6 05:27:53.707433 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 6 05:27:53.712475 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 05:27:53.721409 systemd[1]: Reload requested from client PID 1440 ('systemctl') (unit ensure-sysext.service)...
Nov 6 05:27:53.721420 systemd[1]: Reloading...
Nov 6 05:27:53.723360 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 6 05:27:53.723811 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 6 05:27:53.724166 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 6 05:27:53.724783 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 6 05:27:53.726330 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 6 05:27:53.728206 systemd-tmpfiles[1441]: ACLs are not supported, ignoring.
Nov 6 05:27:53.728830 systemd-tmpfiles[1441]: ACLs are not supported, ignoring.
Nov 6 05:27:53.734130 systemd-tmpfiles[1441]: Detected autofs mount point /boot during canonicalization of boot.
Nov 6 05:27:53.734136 systemd-tmpfiles[1441]: Skipping /boot
Nov 6 05:27:53.740522 systemd-tmpfiles[1441]: Detected autofs mount point /boot during canonicalization of boot.
Nov 6 05:27:53.740528 systemd-tmpfiles[1441]: Skipping /boot
Nov 6 05:27:53.747087 ldconfig[1305]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 6 05:27:53.761789 systemd-udevd[1442]: Using default interface naming scheme 'v255'.
Nov 6 05:27:53.767260 zram_generator::config[1468]: No configuration found.
Nov 6 05:27:53.861258 kernel: mousedev: PS/2 mouse device common for all mice
Nov 6 05:27:53.872246 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 6 05:27:53.882325 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Nov 6 05:27:53.889252 kernel: ACPI: button: Power Button [PWRF]
Nov 6 05:27:53.940613 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 6 05:27:53.940789 systemd[1]: Reloading finished in 219 ms.
Nov 6 05:27:53.948899 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 05:27:53.949269 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 6 05:27:53.953831 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 05:27:53.969891 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Nov 6 05:27:53.975437 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 05:27:53.976823 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 6 05:27:53.979397 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 6 05:27:53.980890 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 05:27:53.985162 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 05:27:53.989077 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 05:27:53.989290 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 05:27:53.992095 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 6 05:27:53.994595 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 05:27:53.996053 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 6 05:27:53.998982 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 6 05:27:54.004380 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 6 05:27:54.011354 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 6 05:27:54.011501 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 05:27:54.012765 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 05:27:54.014359 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 05:27:54.014727 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 6 05:27:54.014847 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 6 05:27:54.015191 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 6 05:27:54.015315 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 6 05:27:54.019741 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 05:27:54.023244 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Nov 6 05:27:54.023566 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 05:27:54.030102 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 05:27:54.034396 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 05:27:54.034564 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 05:27:54.034650 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 05:27:54.034722 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 05:27:54.041930 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 6 05:27:54.043172 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 6 05:27:54.046674 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 05:27:54.049136 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 6 05:27:54.050331 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 05:27:54.050405 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 05:27:54.050510 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 05:27:54.053262 systemd[1]: Finished ensure-sysext.service.
Nov 6 05:27:54.053560 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 6 05:27:54.058360 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 6 05:27:54.059268 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 6 05:27:54.063968 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 6 05:27:54.078509 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 6 05:27:54.078675 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 6 05:27:54.079053 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 05:27:54.079183 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 05:27:54.079486 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 6 05:27:54.084319 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 6 05:27:54.088967 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 6 05:27:54.089333 augenrules[1612]: No rules
Nov 6 05:27:54.090971 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 6 05:27:54.092008 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 6 05:27:54.092566 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 6 05:27:54.093293 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 6 05:27:54.094607 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 6 05:27:54.095293 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 6 05:27:54.095917 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 6 05:27:54.110535 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 6 05:27:54.114609 (udev-worker)[1499]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Nov 6 05:27:54.133388 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 05:27:54.136160 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 6 05:27:54.239487 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 05:27:54.253351 systemd-networkd[1562]: lo: Link UP
Nov 6 05:27:54.253351 systemd-resolved[1563]: Positive Trust Anchors:
Nov 6 05:27:54.253356 systemd-networkd[1562]: lo: Gained carrier
Nov 6 05:27:54.254123 systemd-networkd[1562]: Enumeration completed
Nov 6 05:27:54.254316 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 6 05:27:54.254446 systemd-resolved[1563]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 6 05:27:54.254495 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 6 05:27:54.254537 systemd-resolved[1563]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 6 05:27:54.254644 systemd[1]: Reached target time-set.target - System Time Set.
Nov 6 05:27:54.255623 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 6 05:27:54.256368 systemd-networkd[1562]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
Nov 6 05:27:54.258149 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 6 05:27:54.258276 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Nov 6 05:27:54.258440 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Nov 6 05:27:54.258411 systemd-networkd[1562]: ens192: Link UP
Nov 6 05:27:54.258496 systemd-networkd[1562]: ens192: Gained carrier
Nov 6 05:27:54.261725 systemd-timesyncd[1586]: Network configuration changed, trying to establish connection.
Nov 6 05:27:54.269973 systemd-resolved[1563]: Defaulting to hostname 'linux'.
Nov 6 05:27:54.271033 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 6 05:27:54.271225 systemd[1]: Reached target network.target - Network.
Nov 6 05:27:54.271361 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 6 05:27:54.271511 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 6 05:27:54.271703 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 6 05:27:54.271866 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 6 05:27:54.272013 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 6 05:27:54.272255 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 6 05:27:54.272434 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 6 05:27:54.272583 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 6 05:27:54.272728 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 6 05:27:54.272756 systemd[1]: Reached target paths.target - Path Units.
Nov 6 05:27:54.272874 systemd[1]: Reached target timers.target - Timer Units.
Nov 6 05:27:54.285331 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 6 05:27:54.286887 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 6 05:27:54.288593 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 6 05:27:54.288852 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 6 05:27:54.289024 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 6 05:27:54.291689 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 6 05:27:54.292050 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 6 05:27:54.292817 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 6 05:27:54.293054 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 6 05:27:54.293892 systemd[1]: Reached target sockets.target - Socket Units.
Nov 6 05:27:54.294055 systemd[1]: Reached target basic.target - Basic System.
Nov 6 05:27:54.294278 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 6 05:27:54.294312 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 6 05:27:54.295329 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 6 05:27:54.296486 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 6 05:27:54.299199 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 6 05:27:54.301374 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 6 05:27:54.302756 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 6 05:27:54.302892 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 6 05:27:54.304659 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 6 05:27:54.306400 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 6 05:27:54.309302 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 6 05:27:54.318367 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 6 05:27:54.320501 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 6 05:27:54.324097 jq[1650]: false
Nov 6 05:27:54.326365 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 6 05:27:54.327046 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 6 05:27:54.330797 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 6 05:27:54.332476 systemd[1]: Starting update-engine.service - Update Engine...
Nov 6 05:27:54.337312 extend-filesystems[1651]: Found /dev/sda6
Nov 6 05:27:54.339961 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 6 05:27:54.342290 google_oslogin_nss_cache[1652]: oslogin_cache_refresh[1652]: Refreshing passwd entry cache
Nov 6 05:27:54.341459 oslogin_cache_refresh[1652]: Refreshing passwd entry cache
Nov 6 05:29:32.733540 systemd-timesyncd[1586]: Contacted time server 172.233.177.198:123 (0.flatcar.pool.ntp.org).
Nov 6 05:29:32.733620 systemd-timesyncd[1586]: Initial clock synchronization to Thu 2025-11-06 05:29:32.733463 UTC.
Nov 6 05:29:32.734387 systemd-resolved[1563]: Clock change detected. Flushing caches.
Nov 6 05:29:32.734510 extend-filesystems[1651]: Found /dev/sda9
Nov 6 05:29:32.735740 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools...
Nov 6 05:29:32.738579 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 6 05:29:32.738832 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 6 05:29:32.738947 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 6 05:29:32.739088 systemd[1]: motdgen.service: Deactivated successfully.
Nov 6 05:29:32.739252 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 6 05:29:32.739779 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 6 05:29:32.740251 oslogin_cache_refresh[1652]: Failure getting users, quitting
Nov 6 05:29:32.740644 google_oslogin_nss_cache[1652]: oslogin_cache_refresh[1652]: Failure getting users, quitting
Nov 6 05:29:32.740644 google_oslogin_nss_cache[1652]: oslogin_cache_refresh[1652]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 6 05:29:32.740644 google_oslogin_nss_cache[1652]: oslogin_cache_refresh[1652]: Refreshing group entry cache
Nov 6 05:29:32.739900 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 6 05:29:32.740261 oslogin_cache_refresh[1652]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 6 05:29:32.740281 oslogin_cache_refresh[1652]: Refreshing group entry cache
Nov 6 05:29:32.745211 google_oslogin_nss_cache[1652]: oslogin_cache_refresh[1652]: Failure getting groups, quitting
Nov 6 05:29:32.745211 google_oslogin_nss_cache[1652]: oslogin_cache_refresh[1652]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 6 05:29:32.744775 oslogin_cache_refresh[1652]: Failure getting groups, quitting
Nov 6 05:29:32.744781 oslogin_cache_refresh[1652]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 6 05:29:32.745355 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 6 05:29:32.746272 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 6 05:29:32.747793 extend-filesystems[1651]: Checking size of /dev/sda9
Nov 6 05:29:32.748523 jq[1668]: true
Nov 6 05:29:32.763496 update_engine[1667]: I20251106 05:29:32.763453 1667 main.cc:92] Flatcar Update Engine starting
Nov 6 05:29:32.764882 jq[1687]: true
Nov 6 05:29:32.770678 extend-filesystems[1651]: Resized partition /dev/sda9
Nov 6 05:29:32.773473 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools.
Nov 6 05:29:32.777821 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware...
Nov 6 05:29:32.788597 extend-filesystems[1703]: resize2fs 1.47.3 (8-Jul-2025)
Nov 6 05:29:32.801254 tar[1675]: linux-amd64/LICENSE
Nov 6 05:29:32.801495 tar[1675]: linux-amd64/helm
Nov 6 05:29:32.809553 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 1635323 blocks
Nov 6 05:29:32.809591 kernel: EXT4-fs (sda9): resized filesystem to 1635323
Nov 6 05:29:32.830930 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware.
Nov 6 05:29:32.840940 extend-filesystems[1703]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Nov 6 05:29:32.840940 extend-filesystems[1703]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 6 05:29:32.840940 extend-filesystems[1703]: The filesystem on /dev/sda9 is now 1635323 (4k) blocks long.
Nov 6 05:29:32.840711 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 6 05:29:32.841729 extend-filesystems[1651]: Resized filesystem in /dev/sda9
Nov 6 05:29:32.840850 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 6 05:29:32.852000 unknown[1694]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath
Nov 6 05:29:32.858820 unknown[1694]: Core dump limit set to -1
Nov 6 05:29:32.880659 dbus-daemon[1648]: [system] SELinux support is enabled
Nov 6 05:29:32.880740 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 6 05:29:32.884363 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 6 05:29:32.884380 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 6 05:29:32.884517 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 6 05:29:32.884527 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 6 05:29:32.886062 systemd-logind[1657]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 6 05:29:32.886079 systemd-logind[1657]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 6 05:29:32.888384 bash[1716]: Updated "/home/core/.ssh/authorized_keys"
Nov 6 05:29:32.891095 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 6 05:29:32.892719 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 6 05:29:32.892764 systemd-logind[1657]: New seat seat0.
Nov 6 05:29:32.898861 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 6 05:29:32.906077 update_engine[1667]: I20251106 05:29:32.905535 1667 update_check_scheduler.cc:74] Next update check in 3m56s
Nov 6 05:29:32.910807 systemd[1]: Started update-engine.service - Update Engine.
Nov 6 05:29:32.917970 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 6 05:29:33.069545 containerd[1686]: time="2025-11-06T05:29:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 6 05:29:33.070220 containerd[1686]: time="2025-11-06T05:29:33.069885417Z" level=info msg="starting containerd" revision=75cb2b7193e4e490e9fbdc236c0e811ccaba3376 version=v2.1.4
Nov 6 05:29:33.091169 containerd[1686]: time="2025-11-06T05:29:33.090489134Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="5.83µs"
Nov 6 05:29:33.091169 containerd[1686]: time="2025-11-06T05:29:33.090511995Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 6 05:29:33.091169 containerd[1686]: time="2025-11-06T05:29:33.090537648Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 6 05:29:33.091169 containerd[1686]: time="2025-11-06T05:29:33.090545957Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 6 05:29:33.091169 containerd[1686]: time="2025-11-06T05:29:33.090649419Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 6 05:29:33.091169 containerd[1686]: time="2025-11-06T05:29:33.090659447Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 6 05:29:33.091169 containerd[1686]: time="2025-11-06T05:29:33.090695617Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 6 05:29:33.091169 containerd[1686]: time="2025-11-06T05:29:33.090703087Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 6 05:29:33.091169 containerd[1686]: time="2025-11-06T05:29:33.090856597Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 6 05:29:33.091169 containerd[1686]: time="2025-11-06T05:29:33.090866256Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 6 05:29:33.091169 containerd[1686]: time="2025-11-06T05:29:33.090872705Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 6 05:29:33.091169 containerd[1686]: time="2025-11-06T05:29:33.090877156Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Nov 6 05:29:33.091421 containerd[1686]: time="2025-11-06T05:29:33.090959746Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Nov 6 05:29:33.091421 containerd[1686]: time="2025-11-06T05:29:33.090966959Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 6 05:29:33.091421 containerd[1686]: time="2025-11-06T05:29:33.091010244Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 6 05:29:33.091421 containerd[1686]: time="2025-11-06T05:29:33.091111469Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 6 05:29:33.093439 containerd[1686]: time="2025-11-06T05:29:33.093147665Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 6 05:29:33.093439 containerd[1686]: time="2025-11-06T05:29:33.093162192Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 6 05:29:33.093439 containerd[1686]: time="2025-11-06T05:29:33.093187025Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 6 05:29:33.093439 containerd[1686]: time="2025-11-06T05:29:33.093343305Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 6 05:29:33.093439 containerd[1686]: time="2025-11-06T05:29:33.093392347Z" level=info msg="metadata content store policy set" policy=shared
Nov 6 05:29:33.094830 containerd[1686]: time="2025-11-06T05:29:33.094813502Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 6 05:29:33.094871 containerd[1686]: time="2025-11-06T05:29:33.094852803Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Nov 6 05:29:33.096859 containerd[1686]: time="2025-11-06T05:29:33.094894981Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Nov 6 05:29:33.096859 containerd[1686]: time="2025-11-06T05:29:33.096855640Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 6 05:29:33.096913 containerd[1686]: time="2025-11-06T05:29:33.096867264Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 6 05:29:33.096913 containerd[1686]: time="2025-11-06T05:29:33.096875099Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 6 05:29:33.096913 containerd[1686]: time="2025-11-06T05:29:33.096881927Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 6 05:29:33.096913 containerd[1686]: time="2025-11-06T05:29:33.096888157Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 6 05:29:33.096913 containerd[1686]: time="2025-11-06T05:29:33.096896093Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 6 05:29:33.096913 containerd[1686]: time="2025-11-06T05:29:33.096903272Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 6 05:29:33.097022 containerd[1686]: time="2025-11-06T05:29:33.096915685Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 6 05:29:33.097022 containerd[1686]: time="2025-11-06T05:29:33.096925996Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 6 05:29:33.097022 containerd[1686]: time="2025-11-06T05:29:33.096932550Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 6 05:29:33.097022 containerd[1686]: time="2025-11-06T05:29:33.096939599Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 6 05:29:33.097022 containerd[1686]: time="2025-11-06T05:29:33.096997814Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 6 05:29:33.097022 containerd[1686]: time="2025-11-06T05:29:33.097010447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 6 05:29:33.097022 containerd[1686]: time="2025-11-06T05:29:33.097020451Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 6 05:29:33.097380 containerd[1686]: time="2025-11-06T05:29:33.097028839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 6 05:29:33.097380 containerd[1686]: time="2025-11-06T05:29:33.097036631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 6 05:29:33.097380 containerd[1686]: time="2025-11-06T05:29:33.097043867Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 6 05:29:33.097380 containerd[1686]: time="2025-11-06T05:29:33.097050602Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 6 05:29:33.097380 containerd[1686]: time="2025-11-06T05:29:33.097057057Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 6 05:29:33.097380 containerd[1686]: time="2025-11-06T05:29:33.097063352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 6 05:29:33.097380 containerd[1686]: time="2025-11-06T05:29:33.097069390Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 6 05:29:33.097380 containerd[1686]: time="2025-11-06T05:29:33.097074888Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 6 05:29:33.097380 containerd[1686]: time="2025-11-06T05:29:33.097087617Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 6 05:29:33.097380 containerd[1686]: time="2025-11-06T05:29:33.097113552Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 6 05:29:33.097380 containerd[1686]: time="2025-11-06T05:29:33.097124888Z" level=info msg="Start snapshots syncer"
Nov 6 05:29:33.097380 containerd[1686]: time="2025-11-06T05:29:33.097152490Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 6 05:29:33.097557 containerd[1686]: time="2025-11-06T05:29:33.097375279Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 6 05:29:33.097557 containerd[1686]: time="2025-11-06T05:29:33.097406316Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox
type=io.containerd.podsandbox.controller.v1 Nov 6 05:29:33.097666 containerd[1686]: time="2025-11-06T05:29:33.097451913Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 6 05:29:33.097666 containerd[1686]: time="2025-11-06T05:29:33.097529753Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 6 05:29:33.097666 containerd[1686]: time="2025-11-06T05:29:33.097545331Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 6 05:29:33.097666 containerd[1686]: time="2025-11-06T05:29:33.097552386Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 6 05:29:33.097666 containerd[1686]: time="2025-11-06T05:29:33.097558396Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 6 05:29:33.097666 containerd[1686]: time="2025-11-06T05:29:33.097565179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 6 05:29:33.097666 containerd[1686]: time="2025-11-06T05:29:33.097573936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 6 05:29:33.097666 containerd[1686]: time="2025-11-06T05:29:33.097584077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 6 05:29:33.097666 containerd[1686]: time="2025-11-06T05:29:33.097590620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 6 05:29:33.097666 containerd[1686]: time="2025-11-06T05:29:33.097597712Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 6 05:29:33.099318 containerd[1686]: time="2025-11-06T05:29:33.099175121Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp 
type=io.containerd.tracing.processor.v1 Nov 6 05:29:33.099318 containerd[1686]: time="2025-11-06T05:29:33.099191206Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 05:29:33.099318 containerd[1686]: time="2025-11-06T05:29:33.099197229Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 05:29:33.099318 containerd[1686]: time="2025-11-06T05:29:33.099202677Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 05:29:33.099318 containerd[1686]: time="2025-11-06T05:29:33.099208576Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 6 05:29:33.099318 containerd[1686]: time="2025-11-06T05:29:33.099214780Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 6 05:29:33.099318 containerd[1686]: time="2025-11-06T05:29:33.099221694Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 6 05:29:33.099318 containerd[1686]: time="2025-11-06T05:29:33.099232231Z" level=info msg="runtime interface created" Nov 6 05:29:33.099318 containerd[1686]: time="2025-11-06T05:29:33.099236091Z" level=info msg="created NRI interface" Nov 6 05:29:33.099318 containerd[1686]: time="2025-11-06T05:29:33.099243381Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 6 05:29:33.099318 containerd[1686]: time="2025-11-06T05:29:33.099251672Z" level=info msg="Connect containerd service" Nov 6 05:29:33.099318 containerd[1686]: time="2025-11-06T05:29:33.099265437Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 05:29:33.103469 containerd[1686]: time="2025-11-06T05:29:33.103361133Z" level=error 
msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 05:29:33.120195 locksmithd[1726]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 6 05:29:33.244282 tar[1675]: linux-amd64/README.md Nov 6 05:29:33.255082 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 6 05:29:33.260311 containerd[1686]: time="2025-11-06T05:29:33.260284929Z" level=info msg="Start subscribing containerd event" Nov 6 05:29:33.260358 containerd[1686]: time="2025-11-06T05:29:33.260323856Z" level=info msg="Start recovering state" Nov 6 05:29:33.260479 containerd[1686]: time="2025-11-06T05:29:33.260467303Z" level=info msg="Start event monitor" Nov 6 05:29:33.260512 containerd[1686]: time="2025-11-06T05:29:33.260483326Z" level=info msg="Start cni network conf syncer for default" Nov 6 05:29:33.260512 containerd[1686]: time="2025-11-06T05:29:33.260489311Z" level=info msg="Start streaming server" Nov 6 05:29:33.262173 containerd[1686]: time="2025-11-06T05:29:33.260495361Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 6 05:29:33.262214 containerd[1686]: time="2025-11-06T05:29:33.262205621Z" level=info msg="runtime interface starting up..." Nov 6 05:29:33.262250 containerd[1686]: time="2025-11-06T05:29:33.262244200Z" level=info msg="starting plugins..." Nov 6 05:29:33.262281 containerd[1686]: time="2025-11-06T05:29:33.260470736Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 05:29:33.262319 containerd[1686]: time="2025-11-06T05:29:33.262285563Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 6 05:29:33.262398 containerd[1686]: time="2025-11-06T05:29:33.262389825Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 6 05:29:33.262500 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 05:29:33.263113 containerd[1686]: time="2025-11-06T05:29:33.263105404Z" level=info msg="containerd successfully booted in 0.193774s" Nov 6 05:29:33.321423 sshd_keygen[1684]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 05:29:33.334030 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 05:29:33.335489 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 6 05:29:33.342304 systemd[1]: issuegen.service: Deactivated successfully. Nov 6 05:29:33.342443 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 05:29:33.343775 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 05:29:33.360466 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 05:29:33.361669 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 6 05:29:33.363294 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 6 05:29:33.363500 systemd[1]: Reached target getty.target - Login Prompts. Nov 6 05:29:34.238250 systemd-networkd[1562]: ens192: Gained IPv6LL Nov 6 05:29:34.239649 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 6 05:29:34.240342 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 05:29:34.241394 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Nov 6 05:29:34.248206 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 05:29:34.249329 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 6 05:29:34.292979 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 6 05:29:34.293271 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Nov 6 05:29:34.293932 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Nov 6 05:29:34.295084 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 05:29:35.904700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 05:29:35.905117 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 05:29:35.905674 systemd[1]: Startup finished in 2.677s (kernel) + 5.100s (initrd) + 5.084s (userspace) = 12.863s. Nov 6 05:29:35.914665 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 05:29:35.961005 login[1814]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 6 05:29:35.962423 login[1815]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 6 05:29:35.971428 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 05:29:35.976280 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 05:29:35.978039 systemd-logind[1657]: New session 1 of user core. Nov 6 05:29:35.984539 systemd-logind[1657]: New session 2 of user core. Nov 6 05:29:35.987598 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 05:29:35.989236 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 6 05:29:36.008269 (systemd)[1854]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 05:29:36.010124 systemd-logind[1657]: New session c1 of user core. Nov 6 05:29:36.117563 systemd[1854]: Queued start job for default target default.target. Nov 6 05:29:36.137711 systemd[1854]: Created slice app.slice - User Application Slice. Nov 6 05:29:36.137834 systemd[1854]: Reached target paths.target - Paths. Nov 6 05:29:36.137981 systemd[1854]: Reached target timers.target - Timers. 
Nov 6 05:29:36.139209 systemd[1854]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 05:29:36.150065 systemd[1854]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 05:29:36.150174 systemd[1854]: Reached target sockets.target - Sockets. Nov 6 05:29:36.150212 systemd[1854]: Reached target basic.target - Basic System. Nov 6 05:29:36.150243 systemd[1854]: Reached target default.target - Main User Target. Nov 6 05:29:36.150261 systemd[1854]: Startup finished in 135ms. Nov 6 05:29:36.150271 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 05:29:36.159307 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 05:29:36.160178 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 6 05:29:36.692931 kubelet[1847]: E1106 05:29:36.692895 1847 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 05:29:36.694492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 05:29:36.694577 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 05:29:36.694790 systemd[1]: kubelet.service: Consumed 627ms CPU time, 256.3M memory peak. Nov 6 05:29:46.816559 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 05:29:46.817569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 05:29:47.044732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 05:29:47.049285 (kubelet)[1896]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 05:29:47.089359 kubelet[1896]: E1106 05:29:47.089292 1896 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 05:29:47.091721 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 05:29:47.091874 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 05:29:47.092256 systemd[1]: kubelet.service: Consumed 91ms CPU time, 110.6M memory peak. Nov 6 05:29:57.316679 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 6 05:29:57.318456 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 05:29:57.663811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 05:29:57.666865 (kubelet)[1911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 05:29:57.688816 kubelet[1911]: E1106 05:29:57.688790 1911 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 05:29:57.689997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 05:29:57.690080 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 05:29:57.690407 systemd[1]: kubelet.service: Consumed 100ms CPU time, 110M memory peak. 
Nov 6 05:30:03.024740 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 05:30:03.028256 systemd[1]: Started sshd@0-139.178.70.103:22-139.178.68.195:38362.service - OpenSSH per-connection server daemon (139.178.68.195:38362). Nov 6 05:30:03.081032 sshd[1918]: Accepted publickey for core from 139.178.68.195 port 38362 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU Nov 6 05:30:03.081800 sshd-session[1918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:30:03.084980 systemd-logind[1657]: New session 3 of user core. Nov 6 05:30:03.092298 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 05:30:03.106643 systemd[1]: Started sshd@1-139.178.70.103:22-139.178.68.195:38368.service - OpenSSH per-connection server daemon (139.178.68.195:38368). Nov 6 05:30:03.142317 sshd[1924]: Accepted publickey for core from 139.178.68.195 port 38368 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU Nov 6 05:30:03.142976 sshd-session[1924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:30:03.145973 systemd-logind[1657]: New session 4 of user core. Nov 6 05:30:03.153299 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 6 05:30:03.161063 sshd[1927]: Connection closed by 139.178.68.195 port 38368 Nov 6 05:30:03.161342 sshd-session[1924]: pam_unix(sshd:session): session closed for user core Nov 6 05:30:03.170596 systemd[1]: sshd@1-139.178.70.103:22-139.178.68.195:38368.service: Deactivated successfully. Nov 6 05:30:03.171648 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 05:30:03.172275 systemd-logind[1657]: Session 4 logged out. Waiting for processes to exit. Nov 6 05:30:03.173461 systemd[1]: Started sshd@2-139.178.70.103:22-139.178.68.195:38376.service - OpenSSH per-connection server daemon (139.178.68.195:38376). Nov 6 05:30:03.174452 systemd-logind[1657]: Removed session 4. 
Nov 6 05:30:03.207530 sshd[1933]: Accepted publickey for core from 139.178.68.195 port 38376 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU Nov 6 05:30:03.208267 sshd-session[1933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:30:03.211362 systemd-logind[1657]: New session 5 of user core. Nov 6 05:30:03.221226 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 6 05:30:03.227162 sshd[1936]: Connection closed by 139.178.68.195 port 38376 Nov 6 05:30:03.227493 sshd-session[1933]: pam_unix(sshd:session): session closed for user core Nov 6 05:30:03.236690 systemd[1]: sshd@2-139.178.70.103:22-139.178.68.195:38376.service: Deactivated successfully. Nov 6 05:30:03.237745 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 05:30:03.238369 systemd-logind[1657]: Session 5 logged out. Waiting for processes to exit. Nov 6 05:30:03.239842 systemd[1]: Started sshd@3-139.178.70.103:22-139.178.68.195:38390.service - OpenSSH per-connection server daemon (139.178.68.195:38390). Nov 6 05:30:03.241164 systemd-logind[1657]: Removed session 5. Nov 6 05:30:03.278398 sshd[1942]: Accepted publickey for core from 139.178.68.195 port 38390 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU Nov 6 05:30:03.279509 sshd-session[1942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:30:03.282741 systemd-logind[1657]: New session 6 of user core. Nov 6 05:30:03.294301 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 6 05:30:03.303151 sshd[1945]: Connection closed by 139.178.68.195 port 38390 Nov 6 05:30:03.303424 sshd-session[1942]: pam_unix(sshd:session): session closed for user core Nov 6 05:30:03.309584 systemd[1]: sshd@3-139.178.70.103:22-139.178.68.195:38390.service: Deactivated successfully. Nov 6 05:30:03.310569 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 05:30:03.311121 systemd-logind[1657]: Session 6 logged out. 
Waiting for processes to exit. Nov 6 05:30:03.312509 systemd[1]: Started sshd@4-139.178.70.103:22-139.178.68.195:38402.service - OpenSSH per-connection server daemon (139.178.68.195:38402). Nov 6 05:30:03.314432 systemd-logind[1657]: Removed session 6. Nov 6 05:30:03.350003 sshd[1951]: Accepted publickey for core from 139.178.68.195 port 38402 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU Nov 6 05:30:03.350734 sshd-session[1951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:30:03.353945 systemd-logind[1657]: New session 7 of user core. Nov 6 05:30:03.363237 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 6 05:30:03.382348 sudo[1955]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 05:30:03.382551 sudo[1955]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 05:30:03.392436 sudo[1955]: pam_unix(sudo:session): session closed for user root Nov 6 05:30:03.393232 sshd[1954]: Connection closed by 139.178.68.195 port 38402 Nov 6 05:30:03.394004 sshd-session[1951]: pam_unix(sshd:session): session closed for user core Nov 6 05:30:03.399395 systemd[1]: sshd@4-139.178.70.103:22-139.178.68.195:38402.service: Deactivated successfully. Nov 6 05:30:03.400301 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 05:30:03.401103 systemd-logind[1657]: Session 7 logged out. Waiting for processes to exit. Nov 6 05:30:03.402488 systemd[1]: Started sshd@5-139.178.70.103:22-139.178.68.195:38408.service - OpenSSH per-connection server daemon (139.178.68.195:38408). Nov 6 05:30:03.405319 systemd-logind[1657]: Removed session 7. 
Nov 6 05:30:03.441652 sshd[1961]: Accepted publickey for core from 139.178.68.195 port 38408 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU Nov 6 05:30:03.442430 sshd-session[1961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:30:03.446171 systemd-logind[1657]: New session 8 of user core. Nov 6 05:30:03.455237 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 6 05:30:03.463313 sudo[1966]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 05:30:03.463509 sudo[1966]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 05:30:03.466240 sudo[1966]: pam_unix(sudo:session): session closed for user root Nov 6 05:30:03.469940 sudo[1965]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 05:30:03.470124 sudo[1965]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 05:30:03.477450 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 05:30:03.505828 augenrules[1988]: No rules Nov 6 05:30:03.506612 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 05:30:03.506774 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 05:30:03.507279 sudo[1965]: pam_unix(sudo:session): session closed for user root Nov 6 05:30:03.508259 sshd[1964]: Connection closed by 139.178.68.195 port 38408 Nov 6 05:30:03.508509 sshd-session[1961]: pam_unix(sshd:session): session closed for user core Nov 6 05:30:03.515102 systemd[1]: sshd@5-139.178.70.103:22-139.178.68.195:38408.service: Deactivated successfully. Nov 6 05:30:03.516778 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 05:30:03.517365 systemd-logind[1657]: Session 8 logged out. Waiting for processes to exit. 
Nov 6 05:30:03.518660 systemd[1]: Started sshd@6-139.178.70.103:22-139.178.68.195:38414.service - OpenSSH per-connection server daemon (139.178.68.195:38414). Nov 6 05:30:03.520454 systemd-logind[1657]: Removed session 8. Nov 6 05:30:03.550500 sshd[1997]: Accepted publickey for core from 139.178.68.195 port 38414 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU Nov 6 05:30:03.551932 sshd-session[1997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:30:03.555754 systemd-logind[1657]: New session 9 of user core. Nov 6 05:30:03.560313 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 6 05:30:03.568299 sudo[2001]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 05:30:03.568494 sudo[2001]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 05:30:03.918191 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 6 05:30:03.933491 (dockerd)[2018]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 05:30:04.136988 dockerd[2018]: time="2025-11-06T05:30:04.136828350Z" level=info msg="Starting up" Nov 6 05:30:04.137884 dockerd[2018]: time="2025-11-06T05:30:04.137684112Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 6 05:30:04.143672 dockerd[2018]: time="2025-11-06T05:30:04.143654404Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 6 05:30:04.152185 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1060596108-merged.mount: Deactivated successfully. Nov 6 05:30:04.168551 dockerd[2018]: time="2025-11-06T05:30:04.168404242Z" level=info msg="Loading containers: start." 
Nov 6 05:30:04.186142 kernel: Initializing XFRM netlink socket Nov 6 05:30:04.367222 systemd-networkd[1562]: docker0: Link UP Nov 6 05:30:04.368527 dockerd[2018]: time="2025-11-06T05:30:04.368493233Z" level=info msg="Loading containers: done." Nov 6 05:30:04.376077 dockerd[2018]: time="2025-11-06T05:30:04.375892848Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 05:30:04.376077 dockerd[2018]: time="2025-11-06T05:30:04.375943338Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 6 05:30:04.376077 dockerd[2018]: time="2025-11-06T05:30:04.375981692Z" level=info msg="Initializing buildkit" Nov 6 05:30:04.384999 dockerd[2018]: time="2025-11-06T05:30:04.384989177Z" level=info msg="Completed buildkit initialization" Nov 6 05:30:04.388973 dockerd[2018]: time="2025-11-06T05:30:04.388960578Z" level=info msg="Daemon has completed initialization" Nov 6 05:30:04.389061 dockerd[2018]: time="2025-11-06T05:30:04.389036189Z" level=info msg="API listen on /run/docker.sock" Nov 6 05:30:04.389389 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 05:30:05.044563 containerd[1686]: time="2025-11-06T05:30:05.044519680Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 6 05:30:06.088463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount761035160.mount: Deactivated successfully. 
Nov 6 05:30:06.629004 containerd[1686]: time="2025-11-06T05:30:06.628975988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:30:06.629746 containerd[1686]: time="2025-11-06T05:30:06.629728234Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=25393225" Nov 6 05:30:06.629883 containerd[1686]: time="2025-11-06T05:30:06.629871507Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:30:06.631386 containerd[1686]: time="2025-11-06T05:30:06.631369614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:30:06.631988 containerd[1686]: time="2025-11-06T05:30:06.631974729Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.587415593s" Nov 6 05:30:06.632040 containerd[1686]: time="2025-11-06T05:30:06.632032359Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 6 05:30:06.632431 containerd[1686]: time="2025-11-06T05:30:06.632367741Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 6 05:30:07.816701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Nov 6 05:30:07.818789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 05:30:07.993652 containerd[1686]: time="2025-11-06T05:30:07.993363761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 05:30:07.994493 containerd[1686]: time="2025-11-06T05:30:07.994464464Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21151604"
Nov 6 05:30:07.995706 containerd[1686]: time="2025-11-06T05:30:07.995676354Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 05:30:07.999201 containerd[1686]: time="2025-11-06T05:30:07.998121549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 05:30:08.000558 containerd[1686]: time="2025-11-06T05:30:08.000480080Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.368054407s"
Nov 6 05:30:08.000558 containerd[1686]: time="2025-11-06T05:30:08.000501058Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\""
Nov 6 05:30:08.004389 containerd[1686]: time="2025-11-06T05:30:08.002483791Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\""
Nov 6 05:30:08.404887 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 05:30:08.413339 (kubelet)[2299]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 6 05:30:08.447422 kubelet[2299]: E1106 05:30:08.447394 2299 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 6 05:30:08.448801 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 6 05:30:08.448941 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 6 05:30:08.449326 systemd[1]: kubelet.service: Consumed 113ms CPU time, 110.3M memory peak.
Nov 6 05:30:09.343809 containerd[1686]: time="2025-11-06T05:30:09.343778725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 05:30:09.344496 containerd[1686]: time="2025-11-06T05:30:09.344478397Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=0"
Nov 6 05:30:09.345155 containerd[1686]: time="2025-11-06T05:30:09.344542463Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 05:30:09.345965 containerd[1686]: time="2025-11-06T05:30:09.345953302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 05:30:09.346589 containerd[1686]: time="2025-11-06T05:30:09.346572531Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.342206939s"
Nov 6 05:30:09.346619 containerd[1686]: time="2025-11-06T05:30:09.346589674Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\""
Nov 6 05:30:09.346887 containerd[1686]: time="2025-11-06T05:30:09.346876071Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\""
Nov 6 05:30:10.380203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1798151930.mount: Deactivated successfully.
Nov 6 05:30:10.616341 containerd[1686]: time="2025-11-06T05:30:10.616254107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 05:30:10.622467 containerd[1686]: time="2025-11-06T05:30:10.622437849Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25960977"
Nov 6 05:30:10.632507 containerd[1686]: time="2025-11-06T05:30:10.632440085Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 05:30:10.642895 containerd[1686]: time="2025-11-06T05:30:10.642847970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 05:30:10.643448 containerd[1686]: time="2025-11-06T05:30:10.643226808Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.296302096s"
Nov 6 05:30:10.643448 containerd[1686]: time="2025-11-06T05:30:10.643250550Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\""
Nov 6 05:30:10.643592 containerd[1686]: time="2025-11-06T05:30:10.643573823Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Nov 6 05:30:11.425743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1485095275.mount: Deactivated successfully.
Nov 6 05:30:12.307837 containerd[1686]: time="2025-11-06T05:30:12.307805089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 05:30:12.310386 containerd[1686]: time="2025-11-06T05:30:12.310115309Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21568893"
Nov 6 05:30:12.310730 containerd[1686]: time="2025-11-06T05:30:12.310711995Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 05:30:12.313720 containerd[1686]: time="2025-11-06T05:30:12.313693775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 05:30:12.314332 containerd[1686]: time="2025-11-06T05:30:12.314308433Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.670712011s"
Nov 6 05:30:12.314368 containerd[1686]: time="2025-11-06T05:30:12.314331063Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Nov 6 05:30:12.314719 containerd[1686]: time="2025-11-06T05:30:12.314703225Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Nov 6 05:30:12.931791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount82832312.mount: Deactivated successfully.
Nov 6 05:30:12.940982 containerd[1686]: time="2025-11-06T05:30:12.940948355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 05:30:12.941559 containerd[1686]: time="2025-11-06T05:30:12.941542974Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0"
Nov 6 05:30:12.941857 containerd[1686]: time="2025-11-06T05:30:12.941842873Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 05:30:12.943143 containerd[1686]: time="2025-11-06T05:30:12.943106803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 05:30:12.943500 containerd[1686]: time="2025-11-06T05:30:12.943479834Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 628.761338ms"
Nov 6 05:30:12.943500 containerd[1686]: time="2025-11-06T05:30:12.943496664Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Nov 6 05:30:12.943947 containerd[1686]: time="2025-11-06T05:30:12.943841488Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Nov 6 05:30:18.049225 update_engine[1667]: I20251106 05:30:18.049175 1667 update_attempter.cc:509] Updating boot flags...
Nov 6 05:30:18.276767 containerd[1686]: time="2025-11-06T05:30:18.276734127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 05:30:18.277301 containerd[1686]: time="2025-11-06T05:30:18.277235072Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=61186606"
Nov 6 05:30:18.277664 containerd[1686]: time="2025-11-06T05:30:18.277650585Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 05:30:18.279452 containerd[1686]: time="2025-11-06T05:30:18.279149073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 05:30:18.280172 containerd[1686]: time="2025-11-06T05:30:18.280156536Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 5.336298686s"
Nov 6 05:30:18.280208 containerd[1686]: time="2025-11-06T05:30:18.280175953Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Nov 6 05:30:18.525715 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Nov 6 05:30:18.527259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 05:30:18.968039 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 05:30:18.974306 (kubelet)[2461]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 6 05:30:19.008838 kubelet[2461]: E1106 05:30:19.008809 2461 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 6 05:30:19.011672 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 6 05:30:19.011758 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 6 05:30:19.011963 systemd[1]: kubelet.service: Consumed 90ms CPU time, 109.2M memory peak.
Nov 6 05:30:20.933269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 05:30:20.933430 systemd[1]: kubelet.service: Consumed 90ms CPU time, 109.2M memory peak.
Nov 6 05:30:20.936472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 05:30:20.968834 systemd[1]: Reload requested from client PID 2477 ('systemctl') (unit session-9.scope)...
Nov 6 05:30:20.968848 systemd[1]: Reloading...
Nov 6 05:30:21.059158 zram_generator::config[2523]: No configuration found.
Nov 6 05:30:21.125145 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Nov 6 05:30:21.215125 systemd[1]: Reloading finished in 245 ms.
Nov 6 05:30:21.238866 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 6 05:30:21.238943 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 6 05:30:21.239144 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 05:30:21.240565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 05:30:21.727533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 05:30:21.730368 (kubelet)[2588]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 6 05:30:21.778007 kubelet[2588]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 6 05:30:21.778233 kubelet[2588]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 6 05:30:21.778341 kubelet[2588]: I1106 05:30:21.778323 2588 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 6 05:30:22.037749 kubelet[2588]: I1106 05:30:22.037668 2588 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Nov 6 05:30:22.037842 kubelet[2588]: I1106 05:30:22.037835 2588 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 6 05:30:22.037890 kubelet[2588]: I1106 05:30:22.037885 2588 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Nov 6 05:30:22.037930 kubelet[2588]: I1106 05:30:22.037924 2588 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 6 05:30:22.038230 kubelet[2588]: I1106 05:30:22.038222 2588 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 6 05:30:22.319403 kubelet[2588]: E1106 05:30:22.319375 2588 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 6 05:30:22.323206 kubelet[2588]: I1106 05:30:22.323187 2588 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 6 05:30:22.362246 kubelet[2588]: I1106 05:30:22.362218 2588 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 6 05:30:22.370197 kubelet[2588]: I1106 05:30:22.370166 2588 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Nov 6 05:30:22.373736 kubelet[2588]: I1106 05:30:22.373683 2588 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 6 05:30:22.375171 kubelet[2588]: I1106 05:30:22.373735 2588 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 6 05:30:22.375301 kubelet[2588]: I1106 05:30:22.375173 2588 topology_manager.go:138] "Creating topology manager with none policy"
Nov 6 05:30:22.375301 kubelet[2588]: I1106 05:30:22.375185 2588 container_manager_linux.go:306] "Creating device plugin manager"
Nov 6 05:30:22.375301 kubelet[2588]: I1106 05:30:22.375285 2588 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Nov 6 05:30:22.376376 kubelet[2588]: I1106 05:30:22.376354 2588 state_mem.go:36] "Initialized new in-memory state store"
Nov 6 05:30:22.376563 kubelet[2588]: I1106 05:30:22.376545 2588 kubelet.go:475] "Attempting to sync node with API server"
Nov 6 05:30:22.376563 kubelet[2588]: I1106 05:30:22.376563 2588 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 6 05:30:22.377117 kubelet[2588]: E1106 05:30:22.377089 2588 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 6 05:30:22.378151 kubelet[2588]: I1106 05:30:22.377845 2588 kubelet.go:387] "Adding apiserver pod source"
Nov 6 05:30:22.378151 kubelet[2588]: I1106 05:30:22.377872 2588 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 6 05:30:22.381007 kubelet[2588]: E1106 05:30:22.380983 2588 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 6 05:30:22.382144 kubelet[2588]: I1106 05:30:22.382119 2588 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1"
Nov 6 05:30:22.385339 kubelet[2588]: I1106 05:30:22.385315 2588 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 6 05:30:22.385420 kubelet[2588]: I1106 05:30:22.385350 2588 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Nov 6 05:30:22.388288 kubelet[2588]: W1106 05:30:22.388261 2588 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 6 05:30:22.395858 kubelet[2588]: I1106 05:30:22.395817 2588 server.go:1262] "Started kubelet"
Nov 6 05:30:22.396579 kubelet[2588]: I1106 05:30:22.396570 2588 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 6 05:30:22.407221 kubelet[2588]: I1106 05:30:22.407184 2588 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 6 05:30:22.422673 kubelet[2588]: I1106 05:30:22.422618 2588 server.go:310] "Adding debug handlers to kubelet server"
Nov 6 05:30:22.432318 kubelet[2588]: I1106 05:30:22.432283 2588 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 6 05:30:22.432403 kubelet[2588]: I1106 05:30:22.432327 2588 server_v1.go:49] "podresources" method="list" useActivePods=true
Nov 6 05:30:22.432467 kubelet[2588]: I1106 05:30:22.432454 2588 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 6 05:30:22.440995 kubelet[2588]: I1106 05:30:22.440781 2588 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 6 05:30:22.442353 kubelet[2588]: I1106 05:30:22.442339 2588 volume_manager.go:313] "Starting Kubelet Volume Manager"
Nov 6 05:30:22.442488 kubelet[2588]: E1106 05:30:22.442468 2588 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 6 05:30:22.442815 kubelet[2588]: I1106 05:30:22.442802 2588 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 6 05:30:22.442846 kubelet[2588]: I1106 05:30:22.442834 2588 reconciler.go:29] "Reconciler: start to sync state"
Nov 6 05:30:22.448009 kubelet[2588]: E1106 05:30:22.447902 2588 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 6 05:30:22.448009 kubelet[2588]: E1106 05:30:22.447946 2588 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="200ms"
Nov 6 05:30:22.455780 kubelet[2588]: E1106 05:30:22.443156 2588 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.103:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.103:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187553d9678de497 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-06 05:30:22.395778199 +0000 UTC m=+0.662822553,LastTimestamp:2025-11-06 05:30:22.395778199 +0000 UTC m=+0.662822553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 6 05:30:22.458514 kubelet[2588]: I1106 05:30:22.458492 2588 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Nov 6 05:30:22.459202 kubelet[2588]: I1106 05:30:22.459174 2588 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Nov 6 05:30:22.459202 kubelet[2588]: I1106 05:30:22.459188 2588 status_manager.go:244] "Starting to sync pod status with apiserver"
Nov 6 05:30:22.459406 kubelet[2588]: I1106 05:30:22.459267 2588 kubelet.go:2427] "Starting kubelet main sync loop"
Nov 6 05:30:22.459406 kubelet[2588]: E1106 05:30:22.459291 2588 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 6 05:30:22.460006 kubelet[2588]: E1106 05:30:22.459990 2588 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 6 05:30:22.489863 kubelet[2588]: I1106 05:30:22.489839 2588 factory.go:223] Registration of the systemd container factory successfully
Nov 6 05:30:22.489944 kubelet[2588]: I1106 05:30:22.489929 2588 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 6 05:30:22.492307 kubelet[2588]: I1106 05:30:22.492012 2588 factory.go:223] Registration of the containerd container factory successfully
Nov 6 05:30:22.506702 kubelet[2588]: I1106 05:30:22.506685 2588 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 6 05:30:22.506795 kubelet[2588]: I1106 05:30:22.506789 2588 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 6 05:30:22.506828 kubelet[2588]: I1106 05:30:22.506824 2588 state_mem.go:36] "Initialized new in-memory state store"
Nov 6 05:30:22.542541 kubelet[2588]: E1106 05:30:22.542511 2588 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 6 05:30:22.563784 kubelet[2588]: E1106 05:30:22.559727 2588 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 6 05:30:22.566749 kubelet[2588]: I1106 05:30:22.566637 2588 policy_none.go:49] "None policy: Start"
Nov 6 05:30:22.566749 kubelet[2588]: I1106 05:30:22.566656 2588 memory_manager.go:187] "Starting memorymanager" policy="None"
Nov 6 05:30:22.566749 kubelet[2588]: I1106 05:30:22.566665 2588 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Nov 6 05:30:22.573520 kubelet[2588]: I1106 05:30:22.573442 2588 policy_none.go:47] "Start"
Nov 6 05:30:22.582895 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 6 05:30:22.595552 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 6 05:30:22.598845 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 6 05:30:22.606333 kubelet[2588]: E1106 05:30:22.606310 2588 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 6 05:30:22.608812 kubelet[2588]: I1106 05:30:22.608705 2588 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 6 05:30:22.608812 kubelet[2588]: I1106 05:30:22.608716 2588 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 6 05:30:22.609732 kubelet[2588]: E1106 05:30:22.609716 2588 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 6 05:30:22.609791 kubelet[2588]: E1106 05:30:22.609764 2588 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Nov 6 05:30:22.619059 kubelet[2588]: I1106 05:30:22.619008 2588 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 6 05:30:22.648453 kubelet[2588]: E1106 05:30:22.648401 2588 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="400ms"
Nov 6 05:30:22.710680 kubelet[2588]: I1106 05:30:22.710647 2588 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 6 05:30:22.710872 kubelet[2588]: E1106 05:30:22.710849 2588 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost"
Nov 6 05:30:22.801730 systemd[1]: Created slice kubepods-burstable-pod0aec103f82290345bcc057479e5c6dde.slice - libcontainer container kubepods-burstable-pod0aec103f82290345bcc057479e5c6dde.slice.
Nov 6 05:30:22.822440 kubelet[2588]: E1106 05:30:22.822400 2588 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 6 05:30:22.825152 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice.
Nov 6 05:30:22.837054 kubelet[2588]: E1106 05:30:22.837035 2588 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 6 05:30:22.839106 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice.
Nov 6 05:30:22.840796 kubelet[2588]: E1106 05:30:22.840775 2588 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 6 05:30:22.846151 kubelet[2588]: I1106 05:30:22.846114 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0aec103f82290345bcc057479e5c6dde-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0aec103f82290345bcc057479e5c6dde\") " pod="kube-system/kube-apiserver-localhost"
Nov 6 05:30:22.846207 kubelet[2588]: I1106 05:30:22.846157 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 05:30:22.846207 kubelet[2588]: I1106 05:30:22.846167 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0aec103f82290345bcc057479e5c6dde-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0aec103f82290345bcc057479e5c6dde\") " pod="kube-system/kube-apiserver-localhost"
Nov 6 05:30:22.846207 kubelet[2588]: I1106 05:30:22.846177 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 05:30:22.846207 kubelet[2588]: I1106 05:30:22.846188 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 05:30:22.846207 kubelet[2588]: I1106 05:30:22.846195 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 05:30:22.846306 kubelet[2588]: I1106 05:30:22.846203 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 05:30:22.846306 kubelet[2588]: I1106 05:30:22.846212 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost"
Nov 6 05:30:22.846306 kubelet[2588]: I1106 05:30:22.846231 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0aec103f82290345bcc057479e5c6dde-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0aec103f82290345bcc057479e5c6dde\") " pod="kube-system/kube-apiserver-localhost"
Nov 6 05:30:22.912506 kubelet[2588]: I1106 05:30:22.912352 2588 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 6 05:30:22.912649 kubelet[2588]: E1106 05:30:22.912636 2588 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost"
Nov 6 05:30:23.048842 kubelet[2588]: E1106 05:30:23.048804 2588 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="800ms"
Nov 6 05:30:23.138697 containerd[1686]: time="2025-11-06T05:30:23.138527732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0aec103f82290345bcc057479e5c6dde,Namespace:kube-system,Attempt:0,}"
Nov 6 05:30:23.154550 containerd[1686]: time="2025-11-06T05:30:23.154462156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}"
Nov 6 05:30:23.170860 containerd[1686]: time="2025-11-06T05:30:23.170839053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}"
Nov 6 05:30:23.208758 kubelet[2588]: E1106 05:30:23.208719 2588 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 6 05:30:23.314391 kubelet[2588]: I1106 05:30:23.314368 2588 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 6 05:30:23.314592 kubelet[2588]: E1106 05:30:23.314577 2588 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost"
Nov 6 05:30:23.385770 kubelet[2588]: E1106 05:30:23.385689 2588 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 6 05:30:23.498566 kubelet[2588]: E1106 05:30:23.498464 2588 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 6 05:30:23.677919 kubelet[2588]: E1106 05:30:23.677886 2588 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 6 05:30:23.850102 kubelet[2588]: E1106 05:30:23.850074 2588 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection
refused" interval="1.6s" Nov 6 05:30:23.920713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3988761060.mount: Deactivated successfully. Nov 6 05:30:23.974694 containerd[1686]: time="2025-11-06T05:30:23.974283356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 05:30:23.988582 containerd[1686]: time="2025-11-06T05:30:23.988550249Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 6 05:30:24.003431 containerd[1686]: time="2025-11-06T05:30:24.003392419Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 05:30:24.004068 containerd[1686]: time="2025-11-06T05:30:24.003883780Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 05:30:24.004593 containerd[1686]: time="2025-11-06T05:30:24.004485651Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 6 05:30:24.005104 containerd[1686]: time="2025-11-06T05:30:24.005092364Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 6 05:30:24.006022 containerd[1686]: time="2025-11-06T05:30:24.005990183Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 05:30:24.007289 containerd[1686]: time="2025-11-06T05:30:24.006838020Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 817.907474ms" Nov 6 05:30:24.007793 containerd[1686]: time="2025-11-06T05:30:24.007730329Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 827.069247ms" Nov 6 05:30:24.007891 containerd[1686]: time="2025-11-06T05:30:24.007872482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 05:30:24.008832 containerd[1686]: time="2025-11-06T05:30:24.008665905Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 819.741491ms" Nov 6 05:30:24.116161 kubelet[2588]: I1106 05:30:24.115835 2588 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 05:30:24.116161 kubelet[2588]: E1106 05:30:24.116076 2588 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Nov 6 05:30:24.135432 containerd[1686]: time="2025-11-06T05:30:24.135404973Z" level=info msg="connecting to shim 
91c980c101882ddabaf1d5743fa220a9bba7ac5ae0cd48c51b0a53bcb8c5c7a7" address="unix:///run/containerd/s/4a120c8b265a73314e6966cabf6eec667641a6be04125a2a40d3ed6bbc32a92d" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:30:24.140998 containerd[1686]: time="2025-11-06T05:30:24.140734467Z" level=info msg="connecting to shim 7bffdb17eb2427f19f08e40eeeacc6d287fd11371654339f15eb2764a1579710" address="unix:///run/containerd/s/3f11dde1506424e41bbe1166de0358e6b9be283d0d95bfe1a24f4047d49efacb" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:30:24.141966 containerd[1686]: time="2025-11-06T05:30:24.141779811Z" level=info msg="connecting to shim 267e77e1d039c321e566530098c42ac82a90bf208352a89b7c6ef3e2e3faeb68" address="unix:///run/containerd/s/096ff613bd7f849f08159f5804675c527b87dd6a01a0967b8dc39ebc1cdaa0d5" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:30:24.216301 systemd[1]: Started cri-containerd-267e77e1d039c321e566530098c42ac82a90bf208352a89b7c6ef3e2e3faeb68.scope - libcontainer container 267e77e1d039c321e566530098c42ac82a90bf208352a89b7c6ef3e2e3faeb68. Nov 6 05:30:24.218232 systemd[1]: Started cri-containerd-7bffdb17eb2427f19f08e40eeeacc6d287fd11371654339f15eb2764a1579710.scope - libcontainer container 7bffdb17eb2427f19f08e40eeeacc6d287fd11371654339f15eb2764a1579710. Nov 6 05:30:24.220151 systemd[1]: Started cri-containerd-91c980c101882ddabaf1d5743fa220a9bba7ac5ae0cd48c51b0a53bcb8c5c7a7.scope - libcontainer container 91c980c101882ddabaf1d5743fa220a9bba7ac5ae0cd48c51b0a53bcb8c5c7a7. 
Nov 6 05:30:24.284229 containerd[1686]: time="2025-11-06T05:30:24.284207089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"267e77e1d039c321e566530098c42ac82a90bf208352a89b7c6ef3e2e3faeb68\"" Nov 6 05:30:24.292824 containerd[1686]: time="2025-11-06T05:30:24.292796200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0aec103f82290345bcc057479e5c6dde,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bffdb17eb2427f19f08e40eeeacc6d287fd11371654339f15eb2764a1579710\"" Nov 6 05:30:24.300001 containerd[1686]: time="2025-11-06T05:30:24.299951569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"91c980c101882ddabaf1d5743fa220a9bba7ac5ae0cd48c51b0a53bcb8c5c7a7\"" Nov 6 05:30:24.301597 containerd[1686]: time="2025-11-06T05:30:24.301178885Z" level=info msg="CreateContainer within sandbox \"267e77e1d039c321e566530098c42ac82a90bf208352a89b7c6ef3e2e3faeb68\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 05:30:24.309909 containerd[1686]: time="2025-11-06T05:30:24.309511773Z" level=info msg="Container 1445b631f7fba3f3f852cfcc97a5e8ce99bfb6d8aec2e9bba100e0b77c890349: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:30:24.311002 containerd[1686]: time="2025-11-06T05:30:24.310976317Z" level=info msg="CreateContainer within sandbox \"91c980c101882ddabaf1d5743fa220a9bba7ac5ae0cd48c51b0a53bcb8c5c7a7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 05:30:24.315089 containerd[1686]: time="2025-11-06T05:30:24.315064782Z" level=info msg="CreateContainer within sandbox \"7bffdb17eb2427f19f08e40eeeacc6d287fd11371654339f15eb2764a1579710\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 05:30:24.318999 
containerd[1686]: time="2025-11-06T05:30:24.318981964Z" level=info msg="Container e304142ce37f848effcaa855ef85fef8488987d2a84a7ae638c019436014ef61: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:30:24.334784 containerd[1686]: time="2025-11-06T05:30:24.334761808Z" level=info msg="Container b094518ea2974cce683effbfe94150e5fa89eecd8fff4e6c56a2710226909303: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:30:24.359457 containerd[1686]: time="2025-11-06T05:30:24.359414550Z" level=info msg="CreateContainer within sandbox \"7bffdb17eb2427f19f08e40eeeacc6d287fd11371654339f15eb2764a1579710\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b094518ea2974cce683effbfe94150e5fa89eecd8fff4e6c56a2710226909303\"" Nov 6 05:30:24.360247 containerd[1686]: time="2025-11-06T05:30:24.360224556Z" level=info msg="CreateContainer within sandbox \"91c980c101882ddabaf1d5743fa220a9bba7ac5ae0cd48c51b0a53bcb8c5c7a7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e304142ce37f848effcaa855ef85fef8488987d2a84a7ae638c019436014ef61\"" Nov 6 05:30:24.360382 containerd[1686]: time="2025-11-06T05:30:24.360364905Z" level=info msg="StartContainer for \"b094518ea2974cce683effbfe94150e5fa89eecd8fff4e6c56a2710226909303\"" Nov 6 05:30:24.361198 containerd[1686]: time="2025-11-06T05:30:24.360903331Z" level=info msg="CreateContainer within sandbox \"267e77e1d039c321e566530098c42ac82a90bf208352a89b7c6ef3e2e3faeb68\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1445b631f7fba3f3f852cfcc97a5e8ce99bfb6d8aec2e9bba100e0b77c890349\"" Nov 6 05:30:24.361198 containerd[1686]: time="2025-11-06T05:30:24.360995138Z" level=info msg="StartContainer for \"e304142ce37f848effcaa855ef85fef8488987d2a84a7ae638c019436014ef61\"" Nov 6 05:30:24.361198 containerd[1686]: time="2025-11-06T05:30:24.361145576Z" level=info msg="connecting to shim b094518ea2974cce683effbfe94150e5fa89eecd8fff4e6c56a2710226909303" 
address="unix:///run/containerd/s/3f11dde1506424e41bbe1166de0358e6b9be283d0d95bfe1a24f4047d49efacb" protocol=ttrpc version=3 Nov 6 05:30:24.361810 containerd[1686]: time="2025-11-06T05:30:24.361794456Z" level=info msg="connecting to shim e304142ce37f848effcaa855ef85fef8488987d2a84a7ae638c019436014ef61" address="unix:///run/containerd/s/4a120c8b265a73314e6966cabf6eec667641a6be04125a2a40d3ed6bbc32a92d" protocol=ttrpc version=3 Nov 6 05:30:24.363066 kubelet[2588]: E1106 05:30:24.363049 2588 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 05:30:24.363122 containerd[1686]: time="2025-11-06T05:30:24.363057839Z" level=info msg="StartContainer for \"1445b631f7fba3f3f852cfcc97a5e8ce99bfb6d8aec2e9bba100e0b77c890349\"" Nov 6 05:30:24.363772 containerd[1686]: time="2025-11-06T05:30:24.363753797Z" level=info msg="connecting to shim 1445b631f7fba3f3f852cfcc97a5e8ce99bfb6d8aec2e9bba100e0b77c890349" address="unix:///run/containerd/s/096ff613bd7f849f08159f5804675c527b87dd6a01a0967b8dc39ebc1cdaa0d5" protocol=ttrpc version=3 Nov 6 05:30:24.379294 systemd[1]: Started cri-containerd-e304142ce37f848effcaa855ef85fef8488987d2a84a7ae638c019436014ef61.scope - libcontainer container e304142ce37f848effcaa855ef85fef8488987d2a84a7ae638c019436014ef61. Nov 6 05:30:24.392272 systemd[1]: Started cri-containerd-b094518ea2974cce683effbfe94150e5fa89eecd8fff4e6c56a2710226909303.scope - libcontainer container b094518ea2974cce683effbfe94150e5fa89eecd8fff4e6c56a2710226909303. 
Nov 6 05:30:24.395252 systemd[1]: Started cri-containerd-1445b631f7fba3f3f852cfcc97a5e8ce99bfb6d8aec2e9bba100e0b77c890349.scope - libcontainer container 1445b631f7fba3f3f852cfcc97a5e8ce99bfb6d8aec2e9bba100e0b77c890349. Nov 6 05:30:24.460800 containerd[1686]: time="2025-11-06T05:30:24.460744219Z" level=info msg="StartContainer for \"b094518ea2974cce683effbfe94150e5fa89eecd8fff4e6c56a2710226909303\" returns successfully" Nov 6 05:30:24.463255 containerd[1686]: time="2025-11-06T05:30:24.463233525Z" level=info msg="StartContainer for \"e304142ce37f848effcaa855ef85fef8488987d2a84a7ae638c019436014ef61\" returns successfully" Nov 6 05:30:24.475978 containerd[1686]: time="2025-11-06T05:30:24.475759274Z" level=info msg="StartContainer for \"1445b631f7fba3f3f852cfcc97a5e8ce99bfb6d8aec2e9bba100e0b77c890349\" returns successfully" Nov 6 05:30:24.497172 kubelet[2588]: E1106 05:30:24.497114 2588 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 05:30:24.497586 kubelet[2588]: E1106 05:30:24.497286 2588 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 05:30:24.499114 kubelet[2588]: E1106 05:30:24.499005 2588 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 05:30:24.715037 kubelet[2588]: E1106 05:30:24.714648 2588 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.103:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.103:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187553d9678de497 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-06 05:30:22.395778199 +0000 UTC m=+0.662822553,LastTimestamp:2025-11-06 05:30:22.395778199 +0000 UTC m=+0.662822553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 6 05:30:25.214890 kubelet[2588]: E1106 05:30:25.214846 2588 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 05:30:25.450365 kubelet[2588]: E1106 05:30:25.450337 2588 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="3.2s" Nov 6 05:30:25.501877 kubelet[2588]: E1106 05:30:25.501515 2588 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 05:30:25.502087 kubelet[2588]: E1106 05:30:25.502075 2588 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 05:30:25.502572 kubelet[2588]: E1106 05:30:25.502488 2588 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 05:30:25.531955 kubelet[2588]: E1106 05:30:25.531890 2588 reflector.go:205] "Failed to watch" err="failed to 
list *v1.Service: Get \"https://139.178.70.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 05:30:25.719325 kubelet[2588]: I1106 05:30:25.718096 2588 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 05:30:25.719482 kubelet[2588]: E1106 05:30:25.719469 2588 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Nov 6 05:30:25.746999 kubelet[2588]: E1106 05:30:25.746980 2588 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 05:30:26.502261 kubelet[2588]: E1106 05:30:26.502243 2588 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 05:30:26.502471 kubelet[2588]: E1106 05:30:26.502425 2588 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 05:30:27.910524 kubelet[2588]: E1106 05:30:27.910481 2588 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 6 05:30:28.336872 kubelet[2588]: E1106 05:30:28.336849 2588 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 6 05:30:28.390971 
kubelet[2588]: I1106 05:30:28.390866 2588 apiserver.go:52] "Watching apiserver" Nov 6 05:30:28.449650 kubelet[2588]: I1106 05:30:28.449618 2588 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 6 05:30:28.653031 kubelet[2588]: E1106 05:30:28.652745 2588 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 6 05:30:28.811776 kubelet[2588]: E1106 05:30:28.811742 2588 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 6 05:30:28.921590 kubelet[2588]: I1106 05:30:28.921438 2588 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 05:30:28.959532 kubelet[2588]: I1106 05:30:28.959375 2588 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 6 05:30:29.043491 kubelet[2588]: I1106 05:30:29.043456 2588 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 05:30:29.093839 kubelet[2588]: I1106 05:30:29.093665 2588 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 05:30:29.105779 kubelet[2588]: I1106 05:30:29.105553 2588 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 6 05:30:30.230827 systemd[1]: Reload requested from client PID 2868 ('systemctl') (unit session-9.scope)... Nov 6 05:30:30.230838 systemd[1]: Reloading... Nov 6 05:30:30.296148 zram_generator::config[2911]: No configuration found. Nov 6 05:30:30.400761 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 6 05:30:30.492676 systemd[1]: Reloading finished in 261 ms. 
Nov 6 05:30:30.517795 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 05:30:30.527370 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 05:30:30.527564 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 05:30:30.527604 systemd[1]: kubelet.service: Consumed 554ms CPU time, 123.3M memory peak. Nov 6 05:30:30.529519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 05:30:31.423410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 05:30:31.435481 (kubelet)[2979]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 05:30:31.500216 kubelet[2979]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 05:30:31.500216 kubelet[2979]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 05:30:31.520395 kubelet[2979]: I1106 05:30:31.520247 2979 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 05:30:31.537226 kubelet[2979]: I1106 05:30:31.537201 2979 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 6 05:30:31.537226 kubelet[2979]: I1106 05:30:31.537218 2979 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 05:30:31.537226 kubelet[2979]: I1106 05:30:31.537233 2979 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 6 05:30:31.537362 kubelet[2979]: I1106 05:30:31.537239 2979 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 6 05:30:31.537387 kubelet[2979]: I1106 05:30:31.537366 2979 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 05:30:31.538160 kubelet[2979]: I1106 05:30:31.538145 2979 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 6 05:30:31.539433 kubelet[2979]: I1106 05:30:31.539345 2979 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 05:30:31.541715 kubelet[2979]: I1106 05:30:31.541702 2979 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 05:30:31.546141 kubelet[2979]: I1106 05:30:31.544966 2979 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 6 05:30:31.546141 kubelet[2979]: I1106 05:30:31.545091 2979 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 05:30:31.546141 kubelet[2979]: I1106 05:30:31.545109 2979 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 05:30:31.546141 kubelet[2979]: I1106 05:30:31.545298 2979 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 05:30:31.547234 kubelet[2979]: I1106 05:30:31.545306 2979 container_manager_linux.go:306] "Creating device plugin manager" Nov 6 05:30:31.547234 kubelet[2979]: I1106 05:30:31.545324 2979 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 6 05:30:31.547234 kubelet[2979]: I1106 05:30:31.546112 2979 state_mem.go:36] 
"Initialized new in-memory state store" Nov 6 05:30:31.547234 kubelet[2979]: I1106 05:30:31.546343 2979 kubelet.go:475] "Attempting to sync node with API server" Nov 6 05:30:31.547234 kubelet[2979]: I1106 05:30:31.546353 2979 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 05:30:31.547234 kubelet[2979]: I1106 05:30:31.546370 2979 kubelet.go:387] "Adding apiserver pod source" Nov 6 05:30:31.547234 kubelet[2979]: I1106 05:30:31.546389 2979 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 05:30:31.571211 kubelet[2979]: I1106 05:30:31.571193 2979 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 6 05:30:31.571622 kubelet[2979]: I1106 05:30:31.571612 2979 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 05:30:31.571675 kubelet[2979]: I1106 05:30:31.571670 2979 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 6 05:30:31.573234 kubelet[2979]: I1106 05:30:31.573226 2979 server.go:1262] "Started kubelet" Nov 6 05:30:31.574339 kubelet[2979]: I1106 05:30:31.574331 2979 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 05:30:31.581417 kubelet[2979]: I1106 05:30:31.581384 2979 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 05:30:31.582101 kubelet[2979]: I1106 05:30:31.581987 2979 server.go:310] "Adding debug handlers to kubelet server" Nov 6 05:30:31.584266 kubelet[2979]: I1106 05:30:31.584102 2979 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 05:30:31.584266 kubelet[2979]: I1106 05:30:31.584143 2979 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 6 05:30:31.584266 kubelet[2979]: I1106 05:30:31.584233 
2979 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 6 05:30:31.584396 kubelet[2979]: I1106 05:30:31.584376 2979 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 6 05:30:31.588617 kubelet[2979]: E1106 05:30:31.588601 2979 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 6 05:30:31.591064 kubelet[2979]: I1106 05:30:31.591052 2979 volume_manager.go:313] "Starting Kubelet Volume Manager"
Nov 6 05:30:31.591867 kubelet[2979]: I1106 05:30:31.591225 2979 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 6 05:30:31.591867 kubelet[2979]: I1106 05:30:31.591313 2979 reconciler.go:29] "Reconciler: start to sync state"
Nov 6 05:30:31.591867 kubelet[2979]: I1106 05:30:31.591705 2979 factory.go:223] Registration of the systemd container factory successfully
Nov 6 05:30:31.591867 kubelet[2979]: I1106 05:30:31.591752 2979 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 6 05:30:31.592747 kubelet[2979]: I1106 05:30:31.592726 2979 factory.go:223] Registration of the containerd container factory successfully
Nov 6 05:30:31.595525 kubelet[2979]: I1106 05:30:31.595509 2979 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Nov 6 05:30:31.596171 kubelet[2979]: I1106 05:30:31.596163 2979 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Nov 6 05:30:31.596217 kubelet[2979]: I1106 05:30:31.596212 2979 status_manager.go:244] "Starting to sync pod status with apiserver"
Nov 6 05:30:31.596258 kubelet[2979]: I1106 05:30:31.596254 2979 kubelet.go:2427] "Starting kubelet main sync loop"
Nov 6 05:30:31.596320 kubelet[2979]: E1106 05:30:31.596310 2979 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 6 05:30:31.642796 kubelet[2979]: I1106 05:30:31.642776 2979 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 6 05:30:31.643060 kubelet[2979]: I1106 05:30:31.643020 2979 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 6 05:30:31.643060 kubelet[2979]: I1106 05:30:31.643040 2979 state_mem.go:36] "Initialized new in-memory state store"
Nov 6 05:30:31.643679 kubelet[2979]: I1106 05:30:31.643668 2979 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 6 05:30:31.643749 kubelet[2979]: I1106 05:30:31.643727 2979 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 6 05:30:31.643799 kubelet[2979]: I1106 05:30:31.643793 2979 policy_none.go:49] "None policy: Start"
Nov 6 05:30:31.643880 kubelet[2979]: I1106 05:30:31.643838 2979 memory_manager.go:187] "Starting memorymanager" policy="None"
Nov 6 05:30:31.643933 kubelet[2979]: I1106 05:30:31.643926 2979 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Nov 6 05:30:31.644259 kubelet[2979]: I1106 05:30:31.644049 2979 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Nov 6 05:30:31.644304 kubelet[2979]: I1106 05:30:31.644298 2979 policy_none.go:47] "Start"
Nov 6 05:30:31.647520 kubelet[2979]: E1106 05:30:31.646884 2979 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 6 05:30:31.647760 kubelet[2979]: I1106 05:30:31.647750 2979 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 6 05:30:31.647820 kubelet[2979]: I1106 05:30:31.647802 2979 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 6 05:30:31.649388 kubelet[2979]: I1106 05:30:31.649005 2979 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 6 05:30:31.650397 kubelet[2979]: E1106 05:30:31.650385 2979 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 6 05:30:31.697746 kubelet[2979]: I1106 05:30:31.697651 2979 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 6 05:30:31.698719 kubelet[2979]: I1106 05:30:31.697766 2979 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 6 05:30:31.698719 kubelet[2979]: I1106 05:30:31.697850 2979 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 6 05:30:31.739778 kubelet[2979]: E1106 05:30:31.739596 2979 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Nov 6 05:30:31.751931 kubelet[2979]: I1106 05:30:31.751875 2979 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 6 05:30:31.752897 kubelet[2979]: E1106 05:30:31.752860 2979 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Nov 6 05:30:31.753005 kubelet[2979]: E1106 05:30:31.752887 2979 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Nov 6 05:30:31.772125 kubelet[2979]: I1106 05:30:31.772090 2979 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Nov 6 05:30:31.772377 kubelet[2979]: I1106 05:30:31.772286 2979 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 6 05:30:31.892389 kubelet[2979]: I1106 05:30:31.892353 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0aec103f82290345bcc057479e5c6dde-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0aec103f82290345bcc057479e5c6dde\") " pod="kube-system/kube-apiserver-localhost"
Nov 6 05:30:31.892389 kubelet[2979]: I1106 05:30:31.892382 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0aec103f82290345bcc057479e5c6dde-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0aec103f82290345bcc057479e5c6dde\") " pod="kube-system/kube-apiserver-localhost"
Nov 6 05:30:31.892389 kubelet[2979]: I1106 05:30:31.892394 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 05:30:31.892516 kubelet[2979]: I1106 05:30:31.892404 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 05:30:31.892516 kubelet[2979]: I1106 05:30:31.892414 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost"
Nov 6 05:30:31.892516 kubelet[2979]: I1106 05:30:31.892422 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0aec103f82290345bcc057479e5c6dde-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0aec103f82290345bcc057479e5c6dde\") " pod="kube-system/kube-apiserver-localhost"
Nov 6 05:30:31.892516 kubelet[2979]: I1106 05:30:31.892429 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 05:30:31.892516 kubelet[2979]: I1106 05:30:31.892438 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 05:30:31.892600 kubelet[2979]: I1106 05:30:31.892448 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 6 05:30:32.571945 kubelet[2979]: I1106 05:30:32.571747 2979 apiserver.go:52] "Watching apiserver"
Nov 6 05:30:32.591784 kubelet[2979]: I1106 05:30:32.591744 2979 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 6 05:30:32.627967 kubelet[2979]: I1106 05:30:32.627945 2979 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 6 05:30:32.629668 kubelet[2979]: I1106 05:30:32.629646 2979 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 6 05:30:32.667814 kubelet[2979]: E1106 05:30:32.667776 2979 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Nov 6 05:30:32.668477 kubelet[2979]: E1106 05:30:32.668420 2979 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Nov 6 05:30:32.731533 kubelet[2979]: I1106 05:30:32.731474 2979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.731462406 podStartE2EDuration="3.731462406s" podCreationTimestamp="2025-11-06 05:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 05:30:32.697464568 +0000 UTC m=+1.241087917" watchObservedRunningTime="2025-11-06 05:30:32.731462406 +0000 UTC m=+1.275085745"
Nov 6 05:30:32.732005 kubelet[2979]: I1106 05:30:32.731863 2979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.731857994 podStartE2EDuration="3.731857994s" podCreationTimestamp="2025-11-06 05:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 05:30:32.731226905 +0000 UTC m=+1.274850245" watchObservedRunningTime="2025-11-06 05:30:32.731857994 +0000 UTC m=+1.275481335"
Nov 6 05:30:35.176841 kubelet[2979]: I1106 05:30:35.176748 2979 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 6 05:30:35.177676 containerd[1686]: time="2025-11-06T05:30:35.177292558Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 6 05:30:35.179098 kubelet[2979]: I1106 05:30:35.177488 2979 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 6 05:30:35.943891 kubelet[2979]: I1106 05:30:35.943231 2979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.943217218 podStartE2EDuration="6.943217218s" podCreationTimestamp="2025-11-06 05:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 05:30:32.754023545 +0000 UTC m=+1.297646894" watchObservedRunningTime="2025-11-06 05:30:35.943217218 +0000 UTC m=+4.486840567"
Nov 6 05:30:35.952176 systemd[1]: Created slice kubepods-besteffort-pod41758254_7426_41cf_9017_059ea166ca87.slice - libcontainer container kubepods-besteffort-pod41758254_7426_41cf_9017_059ea166ca87.slice.
Nov 6 05:30:36.022815 kubelet[2979]: I1106 05:30:36.022783 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41758254-7426-41cf-9017-059ea166ca87-xtables-lock\") pod \"kube-proxy-xdtwt\" (UID: \"41758254-7426-41cf-9017-059ea166ca87\") " pod="kube-system/kube-proxy-xdtwt"
Nov 6 05:30:36.022815 kubelet[2979]: I1106 05:30:36.022816 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41758254-7426-41cf-9017-059ea166ca87-lib-modules\") pod \"kube-proxy-xdtwt\" (UID: \"41758254-7426-41cf-9017-059ea166ca87\") " pod="kube-system/kube-proxy-xdtwt"
Nov 6 05:30:36.022933 kubelet[2979]: I1106 05:30:36.022827 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/41758254-7426-41cf-9017-059ea166ca87-kube-proxy\") pod \"kube-proxy-xdtwt\" (UID: \"41758254-7426-41cf-9017-059ea166ca87\") " pod="kube-system/kube-proxy-xdtwt"
Nov 6 05:30:36.022933 kubelet[2979]: I1106 05:30:36.022836 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlbj6\" (UniqueName: \"kubernetes.io/projected/41758254-7426-41cf-9017-059ea166ca87-kube-api-access-zlbj6\") pod \"kube-proxy-xdtwt\" (UID: \"41758254-7426-41cf-9017-059ea166ca87\") " pod="kube-system/kube-proxy-xdtwt"
Nov 6 05:30:36.279961 containerd[1686]: time="2025-11-06T05:30:36.279879917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xdtwt,Uid:41758254-7426-41cf-9017-059ea166ca87,Namespace:kube-system,Attempt:0,}"
Nov 6 05:30:36.387144 containerd[1686]: time="2025-11-06T05:30:36.386561154Z" level=info msg="connecting to shim 7e8a74bf610332e1bac16f2fe7a0bf0d4e807bb1761db6e26b11f570008b762a" address="unix:///run/containerd/s/2e4323b64b0683a23bbb91c8cf260f7b76e5f847b8b4e783dddcb8d7d11f9cea" namespace=k8s.io protocol=ttrpc version=3
Nov 6 05:30:36.404977 systemd[1]: Created slice kubepods-besteffort-pod074843cc_aabb_485d_9463_9dadbe59f96f.slice - libcontainer container kubepods-besteffort-pod074843cc_aabb_485d_9463_9dadbe59f96f.slice.
Nov 6 05:30:36.423222 systemd[1]: Started cri-containerd-7e8a74bf610332e1bac16f2fe7a0bf0d4e807bb1761db6e26b11f570008b762a.scope - libcontainer container 7e8a74bf610332e1bac16f2fe7a0bf0d4e807bb1761db6e26b11f570008b762a.
Nov 6 05:30:36.424955 kubelet[2979]: I1106 05:30:36.424877 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/074843cc-aabb-485d-9463-9dadbe59f96f-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-2bv6v\" (UID: \"074843cc-aabb-485d-9463-9dadbe59f96f\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-2bv6v"
Nov 6 05:30:36.424955 kubelet[2979]: I1106 05:30:36.424910 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzgq2\" (UniqueName: \"kubernetes.io/projected/074843cc-aabb-485d-9463-9dadbe59f96f-kube-api-access-rzgq2\") pod \"tigera-operator-65cdcdfd6d-2bv6v\" (UID: \"074843cc-aabb-485d-9463-9dadbe59f96f\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-2bv6v"
Nov 6 05:30:36.449502 containerd[1686]: time="2025-11-06T05:30:36.449434838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xdtwt,Uid:41758254-7426-41cf-9017-059ea166ca87,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e8a74bf610332e1bac16f2fe7a0bf0d4e807bb1761db6e26b11f570008b762a\""
Nov 6 05:30:36.472190 containerd[1686]: time="2025-11-06T05:30:36.472161166Z" level=info msg="CreateContainer within sandbox \"7e8a74bf610332e1bac16f2fe7a0bf0d4e807bb1761db6e26b11f570008b762a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 6 05:30:36.717123 containerd[1686]: time="2025-11-06T05:30:36.716993711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-2bv6v,Uid:074843cc-aabb-485d-9463-9dadbe59f96f,Namespace:tigera-operator,Attempt:0,}"
Nov 6 05:30:36.730283 containerd[1686]: time="2025-11-06T05:30:36.730258650Z" level=info msg="Container a59d119eb5f429bfd749ae26a8f0666bc41ca2d9e95208d395e7803dba48feaa: CDI devices from CRI Config.CDIDevices: []"
Nov 6 05:30:36.753459 containerd[1686]: time="2025-11-06T05:30:36.753380469Z" level=info msg="CreateContainer within sandbox \"7e8a74bf610332e1bac16f2fe7a0bf0d4e807bb1761db6e26b11f570008b762a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a59d119eb5f429bfd749ae26a8f0666bc41ca2d9e95208d395e7803dba48feaa\""
Nov 6 05:30:36.754089 containerd[1686]: time="2025-11-06T05:30:36.754070039Z" level=info msg="StartContainer for \"a59d119eb5f429bfd749ae26a8f0666bc41ca2d9e95208d395e7803dba48feaa\""
Nov 6 05:30:36.755317 containerd[1686]: time="2025-11-06T05:30:36.755303939Z" level=info msg="connecting to shim a59d119eb5f429bfd749ae26a8f0666bc41ca2d9e95208d395e7803dba48feaa" address="unix:///run/containerd/s/2e4323b64b0683a23bbb91c8cf260f7b76e5f847b8b4e783dddcb8d7d11f9cea" protocol=ttrpc version=3
Nov 6 05:30:36.767585 containerd[1686]: time="2025-11-06T05:30:36.767488431Z" level=info msg="connecting to shim ab3b8729f32a65070bd920a00481e3fb63ac4b4f42f1bf3bb4f9d08d81550869" address="unix:///run/containerd/s/07e05265b4523da267aea17274fd79584f75c199db96ae2138a0ede6c17a09dc" namespace=k8s.io protocol=ttrpc version=3
Nov 6 05:30:36.775399 systemd[1]: Started cri-containerd-a59d119eb5f429bfd749ae26a8f0666bc41ca2d9e95208d395e7803dba48feaa.scope - libcontainer container a59d119eb5f429bfd749ae26a8f0666bc41ca2d9e95208d395e7803dba48feaa.
Nov 6 05:30:36.802416 systemd[1]: Started cri-containerd-ab3b8729f32a65070bd920a00481e3fb63ac4b4f42f1bf3bb4f9d08d81550869.scope - libcontainer container ab3b8729f32a65070bd920a00481e3fb63ac4b4f42f1bf3bb4f9d08d81550869.
Nov 6 05:30:36.837015 containerd[1686]: time="2025-11-06T05:30:36.836949500Z" level=info msg="StartContainer for \"a59d119eb5f429bfd749ae26a8f0666bc41ca2d9e95208d395e7803dba48feaa\" returns successfully"
Nov 6 05:30:36.859790 containerd[1686]: time="2025-11-06T05:30:36.859766719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-2bv6v,Uid:074843cc-aabb-485d-9463-9dadbe59f96f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ab3b8729f32a65070bd920a00481e3fb63ac4b4f42f1bf3bb4f9d08d81550869\""
Nov 6 05:30:36.862312 containerd[1686]: time="2025-11-06T05:30:36.862284154Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 6 05:30:37.150711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3432922996.mount: Deactivated successfully.
Nov 6 05:30:37.647506 kubelet[2979]: I1106 05:30:37.647470 2979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xdtwt" podStartSLOduration=2.647458273 podStartE2EDuration="2.647458273s" podCreationTimestamp="2025-11-06 05:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 05:30:37.647048165 +0000 UTC m=+6.190671513" watchObservedRunningTime="2025-11-06 05:30:37.647458273 +0000 UTC m=+6.191081621"
Nov 6 05:30:38.318951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2260750344.mount: Deactivated successfully.
Nov 6 05:30:38.875868 containerd[1686]: time="2025-11-06T05:30:38.875828333Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205"
Nov 6 05:30:38.881069 containerd[1686]: time="2025-11-06T05:30:38.881019755Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.018696095s"
Nov 6 05:30:38.881069 containerd[1686]: time="2025-11-06T05:30:38.881039430Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 6 05:30:38.889567 containerd[1686]: time="2025-11-06T05:30:38.889532690Z" level=info msg="CreateContainer within sandbox \"ab3b8729f32a65070bd920a00481e3fb63ac4b4f42f1bf3bb4f9d08d81550869\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 6 05:30:38.895575 containerd[1686]: time="2025-11-06T05:30:38.895548654Z" level=info msg="Container 4476a2d8995fab88a2e55e29ae218021a63e1425a8680bf43d0bfb51781a0099: CDI devices from CRI Config.CDIDevices: []"
Nov 6 05:30:38.897845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2986819151.mount: Deactivated successfully.
Nov 6 05:30:38.916091 containerd[1686]: time="2025-11-06T05:30:38.916067809Z" level=info msg="CreateContainer within sandbox \"ab3b8729f32a65070bd920a00481e3fb63ac4b4f42f1bf3bb4f9d08d81550869\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4476a2d8995fab88a2e55e29ae218021a63e1425a8680bf43d0bfb51781a0099\""
Nov 6 05:30:38.917352 containerd[1686]: time="2025-11-06T05:30:38.917059509Z" level=info msg="StartContainer for \"4476a2d8995fab88a2e55e29ae218021a63e1425a8680bf43d0bfb51781a0099\""
Nov 6 05:30:38.921560 containerd[1686]: time="2025-11-06T05:30:38.921532511Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 05:30:38.922256 containerd[1686]: time="2025-11-06T05:30:38.922239026Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 05:30:38.922669 containerd[1686]: time="2025-11-06T05:30:38.922654322Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 05:30:38.924061 containerd[1686]: time="2025-11-06T05:30:38.924037697Z" level=info msg="connecting to shim 4476a2d8995fab88a2e55e29ae218021a63e1425a8680bf43d0bfb51781a0099" address="unix:///run/containerd/s/07e05265b4523da267aea17274fd79584f75c199db96ae2138a0ede6c17a09dc" protocol=ttrpc version=3
Nov 6 05:30:38.943241 systemd[1]: Started cri-containerd-4476a2d8995fab88a2e55e29ae218021a63e1425a8680bf43d0bfb51781a0099.scope - libcontainer container 4476a2d8995fab88a2e55e29ae218021a63e1425a8680bf43d0bfb51781a0099.
Nov 6 05:30:38.964018 containerd[1686]: time="2025-11-06T05:30:38.963999371Z" level=info msg="StartContainer for \"4476a2d8995fab88a2e55e29ae218021a63e1425a8680bf43d0bfb51781a0099\" returns successfully"
Nov 6 05:30:39.652597 kubelet[2979]: I1106 05:30:39.652305 2979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-2bv6v" podStartSLOduration=1.6318119169999998 podStartE2EDuration="3.652296227s" podCreationTimestamp="2025-11-06 05:30:36 +0000 UTC" firstStartedPulling="2025-11-06 05:30:36.860984283 +0000 UTC m=+5.404607624" lastFinishedPulling="2025-11-06 05:30:38.881468597 +0000 UTC m=+7.425091934" observedRunningTime="2025-11-06 05:30:39.652157173 +0000 UTC m=+8.195780514" watchObservedRunningTime="2025-11-06 05:30:39.652296227 +0000 UTC m=+8.195919570"
Nov 6 05:30:44.139912 sudo[2001]: pam_unix(sudo:session): session closed for user root
Nov 6 05:30:44.141382 sshd[2000]: Connection closed by 139.178.68.195 port 38414
Nov 6 05:30:44.143603 sshd-session[1997]: pam_unix(sshd:session): session closed for user core
Nov 6 05:30:44.147356 systemd[1]: sshd@6-139.178.70.103:22-139.178.68.195:38414.service: Deactivated successfully.
Nov 6 05:30:44.149011 systemd[1]: session-9.scope: Deactivated successfully.
Nov 6 05:30:44.149581 systemd[1]: session-9.scope: Consumed 3.753s CPU time, 153.9M memory peak.
Nov 6 05:30:44.154692 systemd-logind[1657]: Session 9 logged out. Waiting for processes to exit.
Nov 6 05:30:44.157427 systemd-logind[1657]: Removed session 9.
Nov 6 05:30:48.232956 systemd[1]: Created slice kubepods-besteffort-pod224ba612_96df_400e_9bc1_2b4ebf648390.slice - libcontainer container kubepods-besteffort-pod224ba612_96df_400e_9bc1_2b4ebf648390.slice.
Nov 6 05:30:48.296333 kubelet[2979]: I1106 05:30:48.296239 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/224ba612-96df-400e-9bc1-2b4ebf648390-tigera-ca-bundle\") pod \"calico-typha-67bd7d946c-h7rmh\" (UID: \"224ba612-96df-400e-9bc1-2b4ebf648390\") " pod="calico-system/calico-typha-67bd7d946c-h7rmh"
Nov 6 05:30:48.296333 kubelet[2979]: I1106 05:30:48.296271 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/224ba612-96df-400e-9bc1-2b4ebf648390-typha-certs\") pod \"calico-typha-67bd7d946c-h7rmh\" (UID: \"224ba612-96df-400e-9bc1-2b4ebf648390\") " pod="calico-system/calico-typha-67bd7d946c-h7rmh"
Nov 6 05:30:48.296333 kubelet[2979]: I1106 05:30:48.296282 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzlqz\" (UniqueName: \"kubernetes.io/projected/224ba612-96df-400e-9bc1-2b4ebf648390-kube-api-access-pzlqz\") pod \"calico-typha-67bd7d946c-h7rmh\" (UID: \"224ba612-96df-400e-9bc1-2b4ebf648390\") " pod="calico-system/calico-typha-67bd7d946c-h7rmh"
Nov 6 05:30:48.434768 systemd[1]: Created slice kubepods-besteffort-pod64fd9f22_9d65_4649_8ea3_7bec848cbe5e.slice - libcontainer container kubepods-besteffort-pod64fd9f22_9d65_4649_8ea3_7bec848cbe5e.slice.
Nov 6 05:30:48.497471 kubelet[2979]: I1106 05:30:48.497378 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99hkr\" (UniqueName: \"kubernetes.io/projected/64fd9f22-9d65-4649-8ea3-7bec848cbe5e-kube-api-access-99hkr\") pod \"calico-node-lfdks\" (UID: \"64fd9f22-9d65-4649-8ea3-7bec848cbe5e\") " pod="calico-system/calico-node-lfdks"
Nov 6 05:30:48.497471 kubelet[2979]: I1106 05:30:48.497414 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/64fd9f22-9d65-4649-8ea3-7bec848cbe5e-var-run-calico\") pod \"calico-node-lfdks\" (UID: \"64fd9f22-9d65-4649-8ea3-7bec848cbe5e\") " pod="calico-system/calico-node-lfdks"
Nov 6 05:30:48.497471 kubelet[2979]: I1106 05:30:48.497425 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/64fd9f22-9d65-4649-8ea3-7bec848cbe5e-cni-bin-dir\") pod \"calico-node-lfdks\" (UID: \"64fd9f22-9d65-4649-8ea3-7bec848cbe5e\") " pod="calico-system/calico-node-lfdks"
Nov 6 05:30:48.497471 kubelet[2979]: I1106 05:30:48.497433 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/64fd9f22-9d65-4649-8ea3-7bec848cbe5e-cni-log-dir\") pod \"calico-node-lfdks\" (UID: \"64fd9f22-9d65-4649-8ea3-7bec848cbe5e\") " pod="calico-system/calico-node-lfdks"
Nov 6 05:30:48.497471 kubelet[2979]: I1106 05:30:48.497440 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/64fd9f22-9d65-4649-8ea3-7bec848cbe5e-flexvol-driver-host\") pod \"calico-node-lfdks\" (UID: \"64fd9f22-9d65-4649-8ea3-7bec848cbe5e\") " pod="calico-system/calico-node-lfdks"
Nov 6 05:30:48.501351 kubelet[2979]: I1106 05:30:48.497449 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64fd9f22-9d65-4649-8ea3-7bec848cbe5e-lib-modules\") pod \"calico-node-lfdks\" (UID: \"64fd9f22-9d65-4649-8ea3-7bec848cbe5e\") " pod="calico-system/calico-node-lfdks"
Nov 6 05:30:48.501351 kubelet[2979]: I1106 05:30:48.497459 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/64fd9f22-9d65-4649-8ea3-7bec848cbe5e-node-certs\") pod \"calico-node-lfdks\" (UID: \"64fd9f22-9d65-4649-8ea3-7bec848cbe5e\") " pod="calico-system/calico-node-lfdks"
Nov 6 05:30:48.501351 kubelet[2979]: I1106 05:30:48.497467 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/64fd9f22-9d65-4649-8ea3-7bec848cbe5e-var-lib-calico\") pod \"calico-node-lfdks\" (UID: \"64fd9f22-9d65-4649-8ea3-7bec848cbe5e\") " pod="calico-system/calico-node-lfdks"
Nov 6 05:30:48.501351 kubelet[2979]: I1106 05:30:48.497477 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/64fd9f22-9d65-4649-8ea3-7bec848cbe5e-cni-net-dir\") pod \"calico-node-lfdks\" (UID: \"64fd9f22-9d65-4649-8ea3-7bec848cbe5e\") " pod="calico-system/calico-node-lfdks"
Nov 6 05:30:48.501351 kubelet[2979]: I1106 05:30:48.497486 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/64fd9f22-9d65-4649-8ea3-7bec848cbe5e-policysync\") pod \"calico-node-lfdks\" (UID: \"64fd9f22-9d65-4649-8ea3-7bec848cbe5e\") " pod="calico-system/calico-node-lfdks"
Nov 6 05:30:48.501514 kubelet[2979]: I1106 05:30:48.497494 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64fd9f22-9d65-4649-8ea3-7bec848cbe5e-tigera-ca-bundle\") pod \"calico-node-lfdks\" (UID: \"64fd9f22-9d65-4649-8ea3-7bec848cbe5e\") " pod="calico-system/calico-node-lfdks"
Nov 6 05:30:48.501514 kubelet[2979]: I1106 05:30:48.497507 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64fd9f22-9d65-4649-8ea3-7bec848cbe5e-xtables-lock\") pod \"calico-node-lfdks\" (UID: \"64fd9f22-9d65-4649-8ea3-7bec848cbe5e\") " pod="calico-system/calico-node-lfdks"
Nov 6 05:30:48.538063 containerd[1686]: time="2025-11-06T05:30:48.538018204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67bd7d946c-h7rmh,Uid:224ba612-96df-400e-9bc1-2b4ebf648390,Namespace:calico-system,Attempt:0,}"
Nov 6 05:30:48.555748 containerd[1686]: time="2025-11-06T05:30:48.555707242Z" level=info msg="connecting to shim 94b26da86f2ffe7cf45e77309c2fb742f461d34846a1e671abb4db01312d8018" address="unix:///run/containerd/s/4be4c8c4402f13dbd5856f0f6fa7b3ae344d9f8818c0e8e06524ffc02ef64292" namespace=k8s.io protocol=ttrpc version=3
Nov 6 05:30:48.580317 systemd[1]: Started cri-containerd-94b26da86f2ffe7cf45e77309c2fb742f461d34846a1e671abb4db01312d8018.scope - libcontainer container 94b26da86f2ffe7cf45e77309c2fb742f461d34846a1e671abb4db01312d8018.
Nov 6 05:30:48.600476 kubelet[2979]: E1106 05:30:48.600451 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 05:30:48.600476 kubelet[2979]: W1106 05:30:48.600470 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 05:30:48.600476 kubelet[2979]: E1106 05:30:48.600488 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 05:30:48.601855 kubelet[2979]: E1106 05:30:48.601836 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 05:30:48.601855 kubelet[2979]: W1106 05:30:48.601853 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 05:30:48.601855 kubelet[2979]: E1106 05:30:48.601867 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 05:30:48.602049 kubelet[2979]: E1106 05:30:48.601995 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 05:30:48.602049 kubelet[2979]: W1106 05:30:48.602000 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 05:30:48.602049 kubelet[2979]: E1106 05:30:48.602006 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 05:30:48.602484 kubelet[2979]: E1106 05:30:48.602474 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 05:30:48.602519 kubelet[2979]: W1106 05:30:48.602483 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 05:30:48.602519 kubelet[2979]: E1106 05:30:48.602493 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 05:30:48.604220 kubelet[2979]: E1106 05:30:48.604184 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 05:30:48.604220 kubelet[2979]: W1106 05:30:48.604197 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 05:30:48.604220 kubelet[2979]: E1106 05:30:48.604209 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 05:30:48.607259 kubelet[2979]: E1106 05:30:48.607228 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 05:30:48.607259 kubelet[2979]: W1106 05:30:48.607242 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 05:30:48.607396 kubelet[2979]: E1106 05:30:48.607371 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 05:30:48.617259 kubelet[2979]: E1106 05:30:48.617162 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 05:30:48.617259 kubelet[2979]: W1106 05:30:48.617178 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 05:30:48.617259 kubelet[2979]: E1106 05:30:48.617191 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 05:30:48.620107 kubelet[2979]: E1106 05:30:48.620015 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-65j25" podUID="8c33e9bc-df47-4f69-aab6-628eca0dd480"
Nov 6 05:30:48.658819 containerd[1686]: time="2025-11-06T05:30:48.658794064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67bd7d946c-h7rmh,Uid:224ba612-96df-400e-9bc1-2b4ebf648390,Namespace:calico-system,Attempt:0,} returns sandbox id \"94b26da86f2ffe7cf45e77309c2fb742f461d34846a1e671abb4db01312d8018\""
Nov 6 05:30:48.659994 containerd[1686]: time="2025-11-06T05:30:48.659980518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 6 05:30:48.695105 kubelet[2979]: E1106 05:30:48.695088 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 05:30:48.695271 kubelet[2979]: W1106 05:30:48.695153 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 05:30:48.695271 kubelet[2979]: E1106 05:30:48.695167 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 05:30:48.695439 kubelet[2979]: E1106 05:30:48.695409 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 05:30:48.695439 kubelet[2979]: W1106 05:30:48.695415 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 05:30:48.695439 kubelet[2979]: E1106 05:30:48.695421 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 6 05:30:48.695645 kubelet[2979]: E1106 05:30:48.695613 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 05:30:48.695645 kubelet[2979]: W1106 05:30:48.695619 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 05:30:48.695645 kubelet[2979]: E1106 05:30:48.695624 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 6 05:30:48.695854 kubelet[2979]: E1106 05:30:48.695817 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.695854 kubelet[2979]: W1106 05:30:48.695823 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.695854 kubelet[2979]: E1106 05:30:48.695829 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:48.696005 kubelet[2979]: E1106 05:30:48.695995 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.696063 kubelet[2979]: W1106 05:30:48.696038 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.696063 kubelet[2979]: E1106 05:30:48.696045 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:48.696229 kubelet[2979]: E1106 05:30:48.696202 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.696229 kubelet[2979]: W1106 05:30:48.696208 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.696229 kubelet[2979]: E1106 05:30:48.696213 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:48.696403 kubelet[2979]: E1106 05:30:48.696354 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.696403 kubelet[2979]: W1106 05:30:48.696359 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.696403 kubelet[2979]: E1106 05:30:48.696365 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:48.696522 kubelet[2979]: E1106 05:30:48.696491 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.696522 kubelet[2979]: W1106 05:30:48.696496 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.696522 kubelet[2979]: E1106 05:30:48.696501 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:48.696700 kubelet[2979]: E1106 05:30:48.696671 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.696700 kubelet[2979]: W1106 05:30:48.696677 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.696700 kubelet[2979]: E1106 05:30:48.696682 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:48.696831 kubelet[2979]: E1106 05:30:48.696827 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.696878 kubelet[2979]: W1106 05:30:48.696855 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.696878 kubelet[2979]: E1106 05:30:48.696862 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:48.697014 kubelet[2979]: E1106 05:30:48.696986 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.697014 kubelet[2979]: W1106 05:30:48.696992 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.697014 kubelet[2979]: E1106 05:30:48.696996 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:48.697187 kubelet[2979]: E1106 05:30:48.697155 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.697187 kubelet[2979]: W1106 05:30:48.697161 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.697187 kubelet[2979]: E1106 05:30:48.697165 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:48.697388 kubelet[2979]: E1106 05:30:48.697361 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.697388 kubelet[2979]: W1106 05:30:48.697367 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.697388 kubelet[2979]: E1106 05:30:48.697372 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:48.697522 kubelet[2979]: E1106 05:30:48.697517 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.697557 kubelet[2979]: W1106 05:30:48.697552 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.697602 kubelet[2979]: E1106 05:30:48.697595 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:48.697742 kubelet[2979]: E1106 05:30:48.697715 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.697742 kubelet[2979]: W1106 05:30:48.697721 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.697742 kubelet[2979]: E1106 05:30:48.697726 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:48.697931 kubelet[2979]: E1106 05:30:48.697896 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.697931 kubelet[2979]: W1106 05:30:48.697902 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.697931 kubelet[2979]: E1106 05:30:48.697907 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:48.698112 kubelet[2979]: E1106 05:30:48.698084 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.698112 kubelet[2979]: W1106 05:30:48.698090 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.698112 kubelet[2979]: E1106 05:30:48.698095 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:48.698301 kubelet[2979]: E1106 05:30:48.698272 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.698301 kubelet[2979]: W1106 05:30:48.698277 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.698301 kubelet[2979]: E1106 05:30:48.698282 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:48.698499 kubelet[2979]: E1106 05:30:48.698469 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.698499 kubelet[2979]: W1106 05:30:48.698475 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.698499 kubelet[2979]: E1106 05:30:48.698480 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:48.698658 kubelet[2979]: E1106 05:30:48.698638 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.698658 kubelet[2979]: W1106 05:30:48.698644 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.698658 kubelet[2979]: E1106 05:30:48.698649 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:48.698896 kubelet[2979]: E1106 05:30:48.698875 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.698896 kubelet[2979]: W1106 05:30:48.698880 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.698896 kubelet[2979]: E1106 05:30:48.698886 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:48.698993 kubelet[2979]: I1106 05:30:48.698961 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8c33e9bc-df47-4f69-aab6-628eca0dd480-registration-dir\") pod \"csi-node-driver-65j25\" (UID: \"8c33e9bc-df47-4f69-aab6-628eca0dd480\") " pod="calico-system/csi-node-driver-65j25" Nov 6 05:30:48.699100 kubelet[2979]: E1106 05:30:48.699093 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.699198 kubelet[2979]: W1106 05:30:48.699148 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.699198 kubelet[2979]: E1106 05:30:48.699158 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:48.699198 kubelet[2979]: I1106 05:30:48.699175 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8c33e9bc-df47-4f69-aab6-628eca0dd480-socket-dir\") pod \"csi-node-driver-65j25\" (UID: \"8c33e9bc-df47-4f69-aab6-628eca0dd480\") " pod="calico-system/csi-node-driver-65j25" Nov 6 05:30:48.699366 kubelet[2979]: E1106 05:30:48.699349 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.699366 kubelet[2979]: W1106 05:30:48.699355 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.699366 kubelet[2979]: E1106 05:30:48.699360 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:48.699461 kubelet[2979]: I1106 05:30:48.699432 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8c33e9bc-df47-4f69-aab6-628eca0dd480-varrun\") pod \"csi-node-driver-65j25\" (UID: \"8c33e9bc-df47-4f69-aab6-628eca0dd480\") " pod="calico-system/csi-node-driver-65j25" Nov 6 05:30:48.699590 kubelet[2979]: E1106 05:30:48.699573 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.699590 kubelet[2979]: W1106 05:30:48.699580 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.699590 kubelet[2979]: E1106 05:30:48.699585 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:48.699676 kubelet[2979]: I1106 05:30:48.699655 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c33e9bc-df47-4f69-aab6-628eca0dd480-kubelet-dir\") pod \"csi-node-driver-65j25\" (UID: \"8c33e9bc-df47-4f69-aab6-628eca0dd480\") " pod="calico-system/csi-node-driver-65j25" Nov 6 05:30:48.699790 kubelet[2979]: E1106 05:30:48.699785 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.699824 kubelet[2979]: W1106 05:30:48.699819 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.699853 kubelet[2979]: E1106 05:30:48.699848 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:48.699893 kubelet[2979]: I1106 05:30:48.699887 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb9ck\" (UniqueName: \"kubernetes.io/projected/8c33e9bc-df47-4f69-aab6-628eca0dd480-kube-api-access-qb9ck\") pod \"csi-node-driver-65j25\" (UID: \"8c33e9bc-df47-4f69-aab6-628eca0dd480\") " pod="calico-system/csi-node-driver-65j25" Nov 6 05:30:48.700040 kubelet[2979]: E1106 05:30:48.700021 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.700040 kubelet[2979]: W1106 05:30:48.700029 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.700040 kubelet[2979]: E1106 05:30:48.700034 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:48.700253 kubelet[2979]: E1106 05:30:48.700236 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.700253 kubelet[2979]: W1106 05:30:48.700242 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.700253 kubelet[2979]: E1106 05:30:48.700247 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:48.700425 kubelet[2979]: E1106 05:30:48.700407 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.700425 kubelet[2979]: W1106 05:30:48.700413 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.700425 kubelet[2979]: E1106 05:30:48.700419 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:48.700603 kubelet[2979]: E1106 05:30:48.700586 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.700603 kubelet[2979]: W1106 05:30:48.700592 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.700603 kubelet[2979]: E1106 05:30:48.700597 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:48.700792 kubelet[2979]: E1106 05:30:48.700775 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.700792 kubelet[2979]: W1106 05:30:48.700781 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.700792 kubelet[2979]: E1106 05:30:48.700786 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:48.700961 kubelet[2979]: E1106 05:30:48.700944 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.700961 kubelet[2979]: W1106 05:30:48.700950 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.700961 kubelet[2979]: E1106 05:30:48.700954 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:48.701165 kubelet[2979]: E1106 05:30:48.701110 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.701165 kubelet[2979]: W1106 05:30:48.701119 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.701165 kubelet[2979]: E1106 05:30:48.701124 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:48.701364 kubelet[2979]: E1106 05:30:48.701346 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.701364 kubelet[2979]: W1106 05:30:48.701351 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.701364 kubelet[2979]: E1106 05:30:48.701356 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:48.701520 kubelet[2979]: E1106 05:30:48.701512 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.701564 kubelet[2979]: W1106 05:30:48.701550 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.701564 kubelet[2979]: E1106 05:30:48.701557 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:48.701721 kubelet[2979]: E1106 05:30:48.701701 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.701721 kubelet[2979]: W1106 05:30:48.701707 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.701721 kubelet[2979]: E1106 05:30:48.701712 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:48.743808 containerd[1686]: time="2025-11-06T05:30:48.742490834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lfdks,Uid:64fd9f22-9d65-4649-8ea3-7bec848cbe5e,Namespace:calico-system,Attempt:0,}" Nov 6 05:30:48.801480 kubelet[2979]: E1106 05:30:48.800339 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.801480 kubelet[2979]: W1106 05:30:48.800355 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.801480 kubelet[2979]: E1106 05:30:48.800367 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:48.801480 kubelet[2979]: E1106 05:30:48.800462 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.801480 kubelet[2979]: W1106 05:30:48.800467 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.801480 kubelet[2979]: E1106 05:30:48.800472 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:48.801480 kubelet[2979]: E1106 05:30:48.800549 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.801480 kubelet[2979]: W1106 05:30:48.800553 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.801480 kubelet[2979]: E1106 05:30:48.800557 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:48.801480 kubelet[2979]: E1106 05:30:48.800908 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.803321 kubelet[2979]: W1106 05:30:48.800913 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.803321 kubelet[2979]: E1106 05:30:48.800918 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:48.803321 kubelet[2979]: E1106 05:30:48.801029 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.803321 kubelet[2979]: W1106 05:30:48.801033 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.803321 kubelet[2979]: E1106 05:30:48.801038 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:48.803321 kubelet[2979]: E1106 05:30:48.801175 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.803321 kubelet[2979]: W1106 05:30:48.801180 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.803321 kubelet[2979]: E1106 05:30:48.801185 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:48.808349 containerd[1686]: time="2025-11-06T05:30:48.804383333Z" level=info msg="connecting to shim 04ef05a8d846c2ebf4be9591d486081aeee924af0d3f802194e2a5e556be91af" address="unix:///run/containerd/s/8e4bfaee00ec2cb93070d819b0c07f2743ed487567f9fd2e07cf89559e6cbca5" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:30:48.808392 kubelet[2979]: E1106 05:30:48.801836 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.808392 kubelet[2979]: W1106 05:30:48.801840 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.808392 kubelet[2979]: E1106 05:30:48.801844 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:48.808392 kubelet[2979]: E1106 05:30:48.801927 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:48.808392 kubelet[2979]: W1106 05:30:48.801931 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:48.808392 kubelet[2979]: E1106 05:30:48.801935 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:48.826389 systemd[1]: Started cri-containerd-04ef05a8d846c2ebf4be9591d486081aeee924af0d3f802194e2a5e556be91af.scope - libcontainer container 04ef05a8d846c2ebf4be9591d486081aeee924af0d3f802194e2a5e556be91af. 
Nov 6 05:30:48.850574 containerd[1686]: time="2025-11-06T05:30:48.850552348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lfdks,Uid:64fd9f22-9d65-4649-8ea3-7bec848cbe5e,Namespace:calico-system,Attempt:0,} returns sandbox id \"04ef05a8d846c2ebf4be9591d486081aeee924af0d3f802194e2a5e556be91af\"" Nov 6 05:30:50.473929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1721973924.mount: Deactivated successfully. Nov 6 05:30:50.599739 kubelet[2979]: E1106 05:30:50.599714 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-65j25" podUID="8c33e9bc-df47-4f69-aab6-628eca0dd480" Nov 6 05:30:51.483556 containerd[1686]: time="2025-11-06T05:30:51.483508595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:30:51.489557 containerd[1686]: time="2025-11-06T05:30:51.489530168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Nov 6 05:30:51.490965 containerd[1686]: time="2025-11-06T05:30:51.490920046Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:30:51.681485 containerd[1686]: time="2025-11-06T05:30:51.681374809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:30:51.681973 containerd[1686]: time="2025-11-06T05:30:51.681636376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.02163858s" Nov 6 05:30:51.681973 containerd[1686]: time="2025-11-06T05:30:51.681760651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 6 05:30:51.716174 containerd[1686]: time="2025-11-06T05:30:51.715536440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 6 05:30:51.733327 containerd[1686]: time="2025-11-06T05:30:51.732370834Z" level=info msg="CreateContainer within sandbox \"94b26da86f2ffe7cf45e77309c2fb742f461d34846a1e671abb4db01312d8018\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 6 05:30:51.739582 containerd[1686]: time="2025-11-06T05:30:51.739421998Z" level=info msg="Container e7c1a2f8928f9162372b76761d4f7bff4de564081ee880e8fe3e66094475ed9f: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:30:51.750092 containerd[1686]: time="2025-11-06T05:30:51.750037481Z" level=info msg="CreateContainer within sandbox \"94b26da86f2ffe7cf45e77309c2fb742f461d34846a1e671abb4db01312d8018\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e7c1a2f8928f9162372b76761d4f7bff4de564081ee880e8fe3e66094475ed9f\"" Nov 6 05:30:51.752244 containerd[1686]: time="2025-11-06T05:30:51.751480318Z" level=info msg="StartContainer for \"e7c1a2f8928f9162372b76761d4f7bff4de564081ee880e8fe3e66094475ed9f\"" Nov 6 05:30:51.752784 containerd[1686]: time="2025-11-06T05:30:51.752683246Z" level=info msg="connecting to shim e7c1a2f8928f9162372b76761d4f7bff4de564081ee880e8fe3e66094475ed9f" address="unix:///run/containerd/s/4be4c8c4402f13dbd5856f0f6fa7b3ae344d9f8818c0e8e06524ffc02ef64292" protocol=ttrpc version=3 Nov 6 
05:30:51.794576 systemd[1]: Started cri-containerd-e7c1a2f8928f9162372b76761d4f7bff4de564081ee880e8fe3e66094475ed9f.scope - libcontainer container e7c1a2f8928f9162372b76761d4f7bff4de564081ee880e8fe3e66094475ed9f. Nov 6 05:30:51.857405 containerd[1686]: time="2025-11-06T05:30:51.857379343Z" level=info msg="StartContainer for \"e7c1a2f8928f9162372b76761d4f7bff4de564081ee880e8fe3e66094475ed9f\" returns successfully" Nov 6 05:30:52.597321 kubelet[2979]: E1106 05:30:52.597283 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-65j25" podUID="8c33e9bc-df47-4f69-aab6-628eca0dd480" Nov 6 05:30:52.849008 kubelet[2979]: E1106 05:30:52.848777 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:52.849008 kubelet[2979]: W1106 05:30:52.848811 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:52.869922 kubelet[2979]: E1106 05:30:52.869895 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:52.883862 kubelet[2979]: E1106 05:30:52.873277 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:52.884030 kubelet[2979]: W1106 05:30:52.873282 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:52.884030 kubelet[2979]: E1106 05:30:52.873289 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:52.884030 kubelet[2979]: E1106 05:30:52.873445 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:52.884030 kubelet[2979]: W1106 05:30:52.873451 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:52.884030 kubelet[2979]: E1106 05:30:52.873457 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:52.884030 kubelet[2979]: E1106 05:30:52.873753 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:52.884030 kubelet[2979]: W1106 05:30:52.873759 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:52.884030 kubelet[2979]: E1106 05:30:52.873764 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:52.884030 kubelet[2979]: E1106 05:30:52.873856 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:52.884030 kubelet[2979]: W1106 05:30:52.873865 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:52.890233 kubelet[2979]: E1106 05:30:52.873873 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:52.890233 kubelet[2979]: E1106 05:30:52.873975 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:52.890233 kubelet[2979]: W1106 05:30:52.873980 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:52.890233 kubelet[2979]: E1106 05:30:52.873984 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:52.890233 kubelet[2979]: E1106 05:30:52.874086 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:52.890233 kubelet[2979]: W1106 05:30:52.874091 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:52.890233 kubelet[2979]: E1106 05:30:52.874096 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:52.890233 kubelet[2979]: E1106 05:30:52.874241 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:52.890233 kubelet[2979]: W1106 05:30:52.874245 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:52.890233 kubelet[2979]: E1106 05:30:52.874251 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:52.900834 kubelet[2979]: E1106 05:30:52.874459 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:52.900834 kubelet[2979]: W1106 05:30:52.874464 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:52.900834 kubelet[2979]: E1106 05:30:52.874469 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:53.788773 kubelet[2979]: I1106 05:30:53.788678 2979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 05:30:53.877558 kubelet[2979]: E1106 05:30:53.877531 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:53.877558 kubelet[2979]: W1106 05:30:53.877549 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:53.877744 kubelet[2979]: E1106 05:30:53.877567 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:53.877744 kubelet[2979]: E1106 05:30:53.877730 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:53.877744 kubelet[2979]: W1106 05:30:53.877736 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:53.877744 kubelet[2979]: E1106 05:30:53.877743 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:30:53.890449 kubelet[2979]: E1106 05:30:53.883167 2979 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:30:53.890449 kubelet[2979]: W1106 05:30:53.883172 2979 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:30:53.890449 kubelet[2979]: E1106 05:30:53.883177 2979 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:30:54.198019 containerd[1686]: time="2025-11-06T05:30:54.197965251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:30:54.216818 containerd[1686]: time="2025-11-06T05:30:54.216788827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=3258" Nov 6 05:30:54.254610 containerd[1686]: time="2025-11-06T05:30:54.254563000Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:30:54.294774 containerd[1686]: time="2025-11-06T05:30:54.294736192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:30:54.295562 containerd[1686]: time="2025-11-06T05:30:54.295416878Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.57985634s" Nov 6 05:30:54.295898 containerd[1686]: time="2025-11-06T05:30:54.295882143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 6 05:30:54.313688 containerd[1686]: time="2025-11-06T05:30:54.313660530Z" level=info msg="CreateContainer within sandbox \"04ef05a8d846c2ebf4be9591d486081aeee924af0d3f802194e2a5e556be91af\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 6 05:30:54.385175 containerd[1686]: time="2025-11-06T05:30:54.385142124Z" level=info msg="Container 4e0a4c56b20bd9cd57c005c6f92d597b9833cca7fb78c621d6be454170130360: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:30:54.387366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1864817169.mount: Deactivated successfully. 
Nov 6 05:30:54.392514 containerd[1686]: time="2025-11-06T05:30:54.392430617Z" level=info msg="CreateContainer within sandbox \"04ef05a8d846c2ebf4be9591d486081aeee924af0d3f802194e2a5e556be91af\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4e0a4c56b20bd9cd57c005c6f92d597b9833cca7fb78c621d6be454170130360\"" Nov 6 05:30:54.393437 containerd[1686]: time="2025-11-06T05:30:54.393313452Z" level=info msg="StartContainer for \"4e0a4c56b20bd9cd57c005c6f92d597b9833cca7fb78c621d6be454170130360\"" Nov 6 05:30:54.402679 containerd[1686]: time="2025-11-06T05:30:54.402326985Z" level=info msg="connecting to shim 4e0a4c56b20bd9cd57c005c6f92d597b9833cca7fb78c621d6be454170130360" address="unix:///run/containerd/s/8e4bfaee00ec2cb93070d819b0c07f2743ed487567f9fd2e07cf89559e6cbca5" protocol=ttrpc version=3 Nov 6 05:30:54.428311 systemd[1]: Started cri-containerd-4e0a4c56b20bd9cd57c005c6f92d597b9833cca7fb78c621d6be454170130360.scope - libcontainer container 4e0a4c56b20bd9cd57c005c6f92d597b9833cca7fb78c621d6be454170130360. Nov 6 05:30:54.467603 containerd[1686]: time="2025-11-06T05:30:54.467035607Z" level=info msg="StartContainer for \"4e0a4c56b20bd9cd57c005c6f92d597b9833cca7fb78c621d6be454170130360\" returns successfully" Nov 6 05:30:54.475742 systemd[1]: cri-containerd-4e0a4c56b20bd9cd57c005c6f92d597b9833cca7fb78c621d6be454170130360.scope: Deactivated successfully. Nov 6 05:30:54.512989 containerd[1686]: time="2025-11-06T05:30:54.507009219Z" level=info msg="received exit event container_id:\"4e0a4c56b20bd9cd57c005c6f92d597b9833cca7fb78c621d6be454170130360\" id:\"4e0a4c56b20bd9cd57c005c6f92d597b9833cca7fb78c621d6be454170130360\" pid:3688 exited_at:{seconds:1762407054 nanos:477767194}" Nov 6 05:30:54.532615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e0a4c56b20bd9cd57c005c6f92d597b9833cca7fb78c621d6be454170130360-rootfs.mount: Deactivated successfully. 
Nov 6 05:30:54.608087 kubelet[2979]: E1106 05:30:54.608018 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-65j25" podUID="8c33e9bc-df47-4f69-aab6-628eca0dd480" Nov 6 05:30:54.902913 kubelet[2979]: I1106 05:30:54.900702 2979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67bd7d946c-h7rmh" podStartSLOduration=3.864272306 podStartE2EDuration="6.90069002s" podCreationTimestamp="2025-11-06 05:30:48 +0000 UTC" firstStartedPulling="2025-11-06 05:30:48.659686528 +0000 UTC m=+17.203309864" lastFinishedPulling="2025-11-06 05:30:51.696104241 +0000 UTC m=+20.239727578" observedRunningTime="2025-11-06 05:30:52.858234739 +0000 UTC m=+21.401858087" watchObservedRunningTime="2025-11-06 05:30:54.90069002 +0000 UTC m=+23.444313367" Nov 6 05:30:55.778362 containerd[1686]: time="2025-11-06T05:30:55.778057709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 6 05:30:56.597244 kubelet[2979]: E1106 05:30:56.597206 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-65j25" podUID="8c33e9bc-df47-4f69-aab6-628eca0dd480" Nov 6 05:30:58.603991 kubelet[2979]: E1106 05:30:58.603958 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-65j25" podUID="8c33e9bc-df47-4f69-aab6-628eca0dd480" Nov 6 05:31:00.601802 kubelet[2979]: E1106 05:31:00.601766 2979 pod_workers.go:1324] "Error 
syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-65j25" podUID="8c33e9bc-df47-4f69-aab6-628eca0dd480" Nov 6 05:31:01.596340 containerd[1686]: time="2025-11-06T05:31:01.596195682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:31:01.614316 containerd[1686]: time="2025-11-06T05:31:01.614293132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Nov 6 05:31:01.623879 containerd[1686]: time="2025-11-06T05:31:01.623833989Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:31:01.636788 containerd[1686]: time="2025-11-06T05:31:01.636754245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:31:01.637588 containerd[1686]: time="2025-11-06T05:31:01.637252720Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.859162816s" Nov 6 05:31:01.637588 containerd[1686]: time="2025-11-06T05:31:01.637276648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 6 05:31:02.044708 containerd[1686]: time="2025-11-06T05:31:02.044626821Z" level=info 
msg="CreateContainer within sandbox \"04ef05a8d846c2ebf4be9591d486081aeee924af0d3f802194e2a5e556be91af\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 6 05:31:02.072418 containerd[1686]: time="2025-11-06T05:31:02.072352424Z" level=info msg="Container 58807faafd46219e5016a31e925ad3278e335bed2c268d11069fb9d52bd09e13: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:31:02.084073 containerd[1686]: time="2025-11-06T05:31:02.084037456Z" level=info msg="CreateContainer within sandbox \"04ef05a8d846c2ebf4be9591d486081aeee924af0d3f802194e2a5e556be91af\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"58807faafd46219e5016a31e925ad3278e335bed2c268d11069fb9d52bd09e13\"" Nov 6 05:31:02.085367 containerd[1686]: time="2025-11-06T05:31:02.084659370Z" level=info msg="StartContainer for \"58807faafd46219e5016a31e925ad3278e335bed2c268d11069fb9d52bd09e13\"" Nov 6 05:31:02.090141 containerd[1686]: time="2025-11-06T05:31:02.090032485Z" level=info msg="connecting to shim 58807faafd46219e5016a31e925ad3278e335bed2c268d11069fb9d52bd09e13" address="unix:///run/containerd/s/8e4bfaee00ec2cb93070d819b0c07f2743ed487567f9fd2e07cf89559e6cbca5" protocol=ttrpc version=3 Nov 6 05:31:02.108227 systemd[1]: Started cri-containerd-58807faafd46219e5016a31e925ad3278e335bed2c268d11069fb9d52bd09e13.scope - libcontainer container 58807faafd46219e5016a31e925ad3278e335bed2c268d11069fb9d52bd09e13. 
Nov 6 05:31:02.157078 containerd[1686]: time="2025-11-06T05:31:02.156945766Z" level=info msg="StartContainer for \"58807faafd46219e5016a31e925ad3278e335bed2c268d11069fb9d52bd09e13\" returns successfully" Nov 6 05:31:02.596755 kubelet[2979]: E1106 05:31:02.596665 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-65j25" podUID="8c33e9bc-df47-4f69-aab6-628eca0dd480" Nov 6 05:31:03.858253 systemd[1]: cri-containerd-58807faafd46219e5016a31e925ad3278e335bed2c268d11069fb9d52bd09e13.scope: Deactivated successfully. Nov 6 05:31:03.858532 systemd[1]: cri-containerd-58807faafd46219e5016a31e925ad3278e335bed2c268d11069fb9d52bd09e13.scope: Consumed 308ms CPU time, 162.9M memory peak, 2.3M read from disk, 171.3M written to disk. Nov 6 05:31:03.863920 containerd[1686]: time="2025-11-06T05:31:03.863900554Z" level=info msg="received exit event container_id:\"58807faafd46219e5016a31e925ad3278e335bed2c268d11069fb9d52bd09e13\" id:\"58807faafd46219e5016a31e925ad3278e335bed2c268d11069fb9d52bd09e13\" pid:3747 exited_at:{seconds:1762407063 nanos:857711006}" Nov 6 05:31:03.932331 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58807faafd46219e5016a31e925ad3278e335bed2c268d11069fb9d52bd09e13-rootfs.mount: Deactivated successfully. 
Nov 6 05:31:03.967902 kubelet[2979]: I1106 05:31:03.967866 2979 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 6 05:31:03.999363 kubelet[2979]: E1106 05:31:03.999199 2979 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-zs4hc\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" podUID="67fe9b67-1f92-4893-a091-15ef1297d78d" pod="kube-system/coredns-66bc5c9577-zs4hc" Nov 6 05:31:04.000976 systemd[1]: Created slice kubepods-burstable-pod67fe9b67_1f92_4893_a091_15ef1297d78d.slice - libcontainer container kubepods-burstable-pod67fe9b67_1f92_4893_a091_15ef1297d78d.slice. Nov 6 05:31:04.003958 kubelet[2979]: E1106 05:31:04.003898 2979 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"goldmane-ca-bundle\"" type="*v1.ConfigMap" Nov 6 05:31:04.003958 kubelet[2979]: E1106 05:31:04.003928 2979 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"goldmane-key-pair\"" type="*v1.Secret" Nov 6 05:31:04.003958 kubelet[2979]: E1106 05:31:04.003957 2979 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the 
namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"whisker-ca-bundle\"" type="*v1.ConfigMap" Nov 6 05:31:04.004090 kubelet[2979]: E1106 05:31:04.004076 2979 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"goldmane\"" type="*v1.ConfigMap" Nov 6 05:31:04.008538 systemd[1]: Created slice kubepods-burstable-pod47bb3fbd_95a7_4d8b_a266_fa714e9b7eb7.slice - libcontainer container kubepods-burstable-pod47bb3fbd_95a7_4d8b_a266_fa714e9b7eb7.slice. Nov 6 05:31:04.013595 systemd[1]: Created slice kubepods-besteffort-podf3152899_b21d_4929_8f12_210f707e4efb.slice - libcontainer container kubepods-besteffort-podf3152899_b21d_4929_8f12_210f707e4efb.slice. Nov 6 05:31:04.017827 systemd[1]: Created slice kubepods-besteffort-podd9191e4f_5d9b_4552_8319_79f0f354642e.slice - libcontainer container kubepods-besteffort-podd9191e4f_5d9b_4552_8319_79f0f354642e.slice. Nov 6 05:31:04.023935 systemd[1]: Created slice kubepods-besteffort-pod78c7b0f4_b8bc_4238_84a0_4c0e67aa615a.slice - libcontainer container kubepods-besteffort-pod78c7b0f4_b8bc_4238_84a0_4c0e67aa615a.slice. Nov 6 05:31:04.028799 systemd[1]: Created slice kubepods-besteffort-pode398733d_8974_4f27_8e6f_2b8309aa0ac5.slice - libcontainer container kubepods-besteffort-pode398733d_8974_4f27_8e6f_2b8309aa0ac5.slice. Nov 6 05:31:04.033582 systemd[1]: Created slice kubepods-besteffort-podc9c32848_e569_4835_803e_e1e4f2da5056.slice - libcontainer container kubepods-besteffort-podc9c32848_e569_4835_803e_e1e4f2da5056.slice. 
Nov 6 05:31:04.132062 kubelet[2979]: I1106 05:31:04.131485 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9191e4f-5d9b-4552-8319-79f0f354642e-tigera-ca-bundle\") pod \"calico-kube-controllers-678545c96c-k4phr\" (UID: \"d9191e4f-5d9b-4552-8319-79f0f354642e\") " pod="calico-system/calico-kube-controllers-678545c96c-k4phr" Nov 6 05:31:04.132062 kubelet[2979]: I1106 05:31:04.131520 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b4xs\" (UniqueName: \"kubernetes.io/projected/78c7b0f4-b8bc-4238-84a0-4c0e67aa615a-kube-api-access-8b4xs\") pod \"goldmane-7c778bb748-5djlt\" (UID: \"78c7b0f4-b8bc-4238-84a0-4c0e67aa615a\") " pod="calico-system/goldmane-7c778bb748-5djlt" Nov 6 05:31:04.132062 kubelet[2979]: I1106 05:31:04.131535 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e398733d-8974-4f27-8e6f-2b8309aa0ac5-calico-apiserver-certs\") pod \"calico-apiserver-647585d966-4pbqk\" (UID: \"e398733d-8974-4f27-8e6f-2b8309aa0ac5\") " pod="calico-apiserver/calico-apiserver-647585d966-4pbqk" Nov 6 05:31:04.132062 kubelet[2979]: I1106 05:31:04.131544 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78c7b0f4-b8bc-4238-84a0-4c0e67aa615a-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-5djlt\" (UID: \"78c7b0f4-b8bc-4238-84a0-4c0e67aa615a\") " pod="calico-system/goldmane-7c778bb748-5djlt" Nov 6 05:31:04.132062 kubelet[2979]: I1106 05:31:04.131555 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f3152899-b21d-4929-8f12-210f707e4efb-calico-apiserver-certs\") pod 
\"calico-apiserver-647585d966-dj8ql\" (UID: \"f3152899-b21d-4929-8f12-210f707e4efb\") " pod="calico-apiserver/calico-apiserver-647585d966-dj8ql" Nov 6 05:31:04.133411 kubelet[2979]: I1106 05:31:04.131564 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9c32848-e569-4835-803e-e1e4f2da5056-whisker-ca-bundle\") pod \"whisker-bb47d67c4-mf2g9\" (UID: \"c9c32848-e569-4835-803e-e1e4f2da5056\") " pod="calico-system/whisker-bb47d67c4-mf2g9" Nov 6 05:31:04.133411 kubelet[2979]: I1106 05:31:04.131589 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6tpj\" (UniqueName: \"kubernetes.io/projected/e398733d-8974-4f27-8e6f-2b8309aa0ac5-kube-api-access-q6tpj\") pod \"calico-apiserver-647585d966-4pbqk\" (UID: \"e398733d-8974-4f27-8e6f-2b8309aa0ac5\") " pod="calico-apiserver/calico-apiserver-647585d966-4pbqk" Nov 6 05:31:04.133411 kubelet[2979]: I1106 05:31:04.131600 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78c7b0f4-b8bc-4238-84a0-4c0e67aa615a-config\") pod \"goldmane-7c778bb748-5djlt\" (UID: \"78c7b0f4-b8bc-4238-84a0-4c0e67aa615a\") " pod="calico-system/goldmane-7c778bb748-5djlt" Nov 6 05:31:04.133411 kubelet[2979]: I1106 05:31:04.131611 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c9c32848-e569-4835-803e-e1e4f2da5056-whisker-backend-key-pair\") pod \"whisker-bb47d67c4-mf2g9\" (UID: \"c9c32848-e569-4835-803e-e1e4f2da5056\") " pod="calico-system/whisker-bb47d67c4-mf2g9" Nov 6 05:31:04.133411 kubelet[2979]: I1106 05:31:04.131624 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnthj\" (UniqueName: 
\"kubernetes.io/projected/d9191e4f-5d9b-4552-8319-79f0f354642e-kube-api-access-xnthj\") pod \"calico-kube-controllers-678545c96c-k4phr\" (UID: \"d9191e4f-5d9b-4552-8319-79f0f354642e\") " pod="calico-system/calico-kube-controllers-678545c96c-k4phr" Nov 6 05:31:04.134416 kubelet[2979]: I1106 05:31:04.131634 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtrxg\" (UniqueName: \"kubernetes.io/projected/67fe9b67-1f92-4893-a091-15ef1297d78d-kube-api-access-xtrxg\") pod \"coredns-66bc5c9577-zs4hc\" (UID: \"67fe9b67-1f92-4893-a091-15ef1297d78d\") " pod="kube-system/coredns-66bc5c9577-zs4hc" Nov 6 05:31:04.134416 kubelet[2979]: I1106 05:31:04.131645 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/78c7b0f4-b8bc-4238-84a0-4c0e67aa615a-goldmane-key-pair\") pod \"goldmane-7c778bb748-5djlt\" (UID: \"78c7b0f4-b8bc-4238-84a0-4c0e67aa615a\") " pod="calico-system/goldmane-7c778bb748-5djlt" Nov 6 05:31:04.134416 kubelet[2979]: I1106 05:31:04.131653 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q772k\" (UniqueName: \"kubernetes.io/projected/47bb3fbd-95a7-4d8b-a266-fa714e9b7eb7-kube-api-access-q772k\") pod \"coredns-66bc5c9577-2ph6d\" (UID: \"47bb3fbd-95a7-4d8b-a266-fa714e9b7eb7\") " pod="kube-system/coredns-66bc5c9577-2ph6d" Nov 6 05:31:04.134416 kubelet[2979]: I1106 05:31:04.131661 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh66h\" (UniqueName: \"kubernetes.io/projected/f3152899-b21d-4929-8f12-210f707e4efb-kube-api-access-gh66h\") pod \"calico-apiserver-647585d966-dj8ql\" (UID: \"f3152899-b21d-4929-8f12-210f707e4efb\") " pod="calico-apiserver/calico-apiserver-647585d966-dj8ql" Nov 6 05:31:04.134416 kubelet[2979]: I1106 05:31:04.131670 2979 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdfbz\" (UniqueName: \"kubernetes.io/projected/c9c32848-e569-4835-803e-e1e4f2da5056-kube-api-access-cdfbz\") pod \"whisker-bb47d67c4-mf2g9\" (UID: \"c9c32848-e569-4835-803e-e1e4f2da5056\") " pod="calico-system/whisker-bb47d67c4-mf2g9" Nov 6 05:31:04.134708 kubelet[2979]: I1106 05:31:04.131680 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47bb3fbd-95a7-4d8b-a266-fa714e9b7eb7-config-volume\") pod \"coredns-66bc5c9577-2ph6d\" (UID: \"47bb3fbd-95a7-4d8b-a266-fa714e9b7eb7\") " pod="kube-system/coredns-66bc5c9577-2ph6d" Nov 6 05:31:04.134708 kubelet[2979]: I1106 05:31:04.131688 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67fe9b67-1f92-4893-a091-15ef1297d78d-config-volume\") pod \"coredns-66bc5c9577-zs4hc\" (UID: \"67fe9b67-1f92-4893-a091-15ef1297d78d\") " pod="kube-system/coredns-66bc5c9577-zs4hc" Nov 6 05:31:04.307669 containerd[1686]: time="2025-11-06T05:31:04.307647768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zs4hc,Uid:67fe9b67-1f92-4893-a091-15ef1297d78d,Namespace:kube-system,Attempt:0,}" Nov 6 05:31:04.313520 containerd[1686]: time="2025-11-06T05:31:04.313369108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2ph6d,Uid:47bb3fbd-95a7-4d8b-a266-fa714e9b7eb7,Namespace:kube-system,Attempt:0,}" Nov 6 05:31:04.335362 containerd[1686]: time="2025-11-06T05:31:04.335330518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647585d966-4pbqk,Uid:e398733d-8974-4f27-8e6f-2b8309aa0ac5,Namespace:calico-apiserver,Attempt:0,}" Nov 6 05:31:04.335716 containerd[1686]: time="2025-11-06T05:31:04.335701128Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-647585d966-dj8ql,Uid:f3152899-b21d-4929-8f12-210f707e4efb,Namespace:calico-apiserver,Attempt:0,}" Nov 6 05:31:04.335753 containerd[1686]: time="2025-11-06T05:31:04.335744775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-678545c96c-k4phr,Uid:d9191e4f-5d9b-4552-8319-79f0f354642e,Namespace:calico-system,Attempt:0,}" Nov 6 05:31:04.608761 systemd[1]: Created slice kubepods-besteffort-pod8c33e9bc_df47_4f69_aab6_628eca0dd480.slice - libcontainer container kubepods-besteffort-pod8c33e9bc_df47_4f69_aab6_628eca0dd480.slice. Nov 6 05:31:04.611976 containerd[1686]: time="2025-11-06T05:31:04.611958661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-65j25,Uid:8c33e9bc-df47-4f69-aab6-628eca0dd480,Namespace:calico-system,Attempt:0,}" Nov 6 05:31:04.656921 containerd[1686]: time="2025-11-06T05:31:04.656856644Z" level=error msg="Failed to destroy network for sandbox \"2a99f77d70eedd4a0a6161711cb322e2cba1b8e7db809e3583f343d4a66a430e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:31:04.658021 containerd[1686]: time="2025-11-06T05:31:04.657973620Z" level=error msg="Failed to destroy network for sandbox \"91ffdca1b20fda278085dc073bd391c51e1dbdeb675a79790b4d1c995108a89e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:31:04.667771 containerd[1686]: time="2025-11-06T05:31:04.667705299Z" level=error msg="Failed to destroy network for sandbox \"4a22609f97fa660c7238e45bc0d139507d1f4be48bfa748ddd45895c77d768b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 6 05:31:04.669918 containerd[1686]: time="2025-11-06T05:31:04.669846743Z" level=error msg="Failed to destroy network for sandbox \"c09cfcd5f8b03eaf31fbacb4ab4751a6bfecd6ccee5d8cb523c752a23223b959\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:31:04.674760 containerd[1686]: time="2025-11-06T05:31:04.674732264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-678545c96c-k4phr,Uid:d9191e4f-5d9b-4552-8319-79f0f354642e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a99f77d70eedd4a0a6161711cb322e2cba1b8e7db809e3583f343d4a66a430e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:31:04.682006 containerd[1686]: time="2025-11-06T05:31:04.681920124Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zs4hc,Uid:67fe9b67-1f92-4893-a091-15ef1297d78d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c09cfcd5f8b03eaf31fbacb4ab4751a6bfecd6ccee5d8cb523c752a23223b959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:31:04.685337 containerd[1686]: time="2025-11-06T05:31:04.684773552Z" level=error msg="Failed to destroy network for sandbox \"e632a12b238da4a2c89b062c7d0e36cc939fb4d53532238f3e4c642ee82a5c00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:31:04.685384 
kubelet[2979]: E1106 05:31:04.685293 2979 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a99f77d70eedd4a0a6161711cb322e2cba1b8e7db809e3583f343d4a66a430e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:31:04.685384 kubelet[2979]: E1106 05:31:04.685324 2979 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a99f77d70eedd4a0a6161711cb322e2cba1b8e7db809e3583f343d4a66a430e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-678545c96c-k4phr" Nov 6 05:31:04.685384 kubelet[2979]: E1106 05:31:04.685344 2979 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a99f77d70eedd4a0a6161711cb322e2cba1b8e7db809e3583f343d4a66a430e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-678545c96c-k4phr" Nov 6 05:31:04.685451 kubelet[2979]: E1106 05:31:04.685383 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-678545c96c-k4phr_calico-system(d9191e4f-5d9b-4552-8319-79f0f354642e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-678545c96c-k4phr_calico-system(d9191e4f-5d9b-4552-8319-79f0f354642e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a99f77d70eedd4a0a6161711cb322e2cba1b8e7db809e3583f343d4a66a430e\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-678545c96c-k4phr" podUID="d9191e4f-5d9b-4552-8319-79f0f354642e" Nov 6 05:31:04.685671 containerd[1686]: time="2025-11-06T05:31:04.685650257Z" level=error msg="Failed to destroy network for sandbox \"acb585ec4403893f38c3b39b64bc162fe97898b21f0db2e490f658f690c5e0af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:31:04.685806 containerd[1686]: time="2025-11-06T05:31:04.685791423Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2ph6d,Uid:47bb3fbd-95a7-4d8b-a266-fa714e9b7eb7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"91ffdca1b20fda278085dc073bd391c51e1dbdeb675a79790b4d1c995108a89e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:31:04.686309 kubelet[2979]: E1106 05:31:04.685268 2979 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c09cfcd5f8b03eaf31fbacb4ab4751a6bfecd6ccee5d8cb523c752a23223b959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:31:04.686309 kubelet[2979]: E1106 05:31:04.686176 2979 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c09cfcd5f8b03eaf31fbacb4ab4751a6bfecd6ccee5d8cb523c752a23223b959\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zs4hc" Nov 6 05:31:04.686309 kubelet[2979]: E1106 05:31:04.686188 2979 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c09cfcd5f8b03eaf31fbacb4ab4751a6bfecd6ccee5d8cb523c752a23223b959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zs4hc" Nov 6 05:31:04.686392 kubelet[2979]: E1106 05:31:04.686211 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-zs4hc_kube-system(67fe9b67-1f92-4893-a091-15ef1297d78d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-zs4hc_kube-system(67fe9b67-1f92-4893-a091-15ef1297d78d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c09cfcd5f8b03eaf31fbacb4ab4751a6bfecd6ccee5d8cb523c752a23223b959\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-zs4hc" podUID="67fe9b67-1f92-4893-a091-15ef1297d78d" Nov 6 05:31:04.686392 kubelet[2979]: E1106 05:31:04.686257 2979 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91ffdca1b20fda278085dc073bd391c51e1dbdeb675a79790b4d1c995108a89e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:31:04.686392 kubelet[2979]: E1106 05:31:04.686267 2979 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91ffdca1b20fda278085dc073bd391c51e1dbdeb675a79790b4d1c995108a89e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2ph6d" Nov 6 05:31:04.686473 kubelet[2979]: E1106 05:31:04.686275 2979 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91ffdca1b20fda278085dc073bd391c51e1dbdeb675a79790b4d1c995108a89e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2ph6d" Nov 6 05:31:04.686473 kubelet[2979]: E1106 05:31:04.686294 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-2ph6d_kube-system(47bb3fbd-95a7-4d8b-a266-fa714e9b7eb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-2ph6d_kube-system(47bb3fbd-95a7-4d8b-a266-fa714e9b7eb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91ffdca1b20fda278085dc073bd391c51e1dbdeb675a79790b4d1c995108a89e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-2ph6d" podUID="47bb3fbd-95a7-4d8b-a266-fa714e9b7eb7" Nov 6 05:31:04.686622 containerd[1686]: time="2025-11-06T05:31:04.686601622Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-65j25,Uid:8c33e9bc-df47-4f69-aab6-628eca0dd480,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"acb585ec4403893f38c3b39b64bc162fe97898b21f0db2e490f658f690c5e0af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:31:04.687055 kubelet[2979]: E1106 05:31:04.687039 2979 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acb585ec4403893f38c3b39b64bc162fe97898b21f0db2e490f658f690c5e0af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:31:04.687081 kubelet[2979]: E1106 05:31:04.687060 2979 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acb585ec4403893f38c3b39b64bc162fe97898b21f0db2e490f658f690c5e0af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-65j25" Nov 6 05:31:04.687081 kubelet[2979]: E1106 05:31:04.687071 2979 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acb585ec4403893f38c3b39b64bc162fe97898b21f0db2e490f658f690c5e0af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-65j25" Nov 6 05:31:04.687119 kubelet[2979]: E1106 05:31:04.687090 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-65j25_calico-system(8c33e9bc-df47-4f69-aab6-628eca0dd480)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"csi-node-driver-65j25_calico-system(8c33e9bc-df47-4f69-aab6-628eca0dd480)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"acb585ec4403893f38c3b39b64bc162fe97898b21f0db2e490f658f690c5e0af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-65j25" podUID="8c33e9bc-df47-4f69-aab6-628eca0dd480" Nov 6 05:31:04.687500 containerd[1686]: time="2025-11-06T05:31:04.687474451Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647585d966-dj8ql,Uid:f3152899-b21d-4929-8f12-210f707e4efb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a22609f97fa660c7238e45bc0d139507d1f4be48bfa748ddd45895c77d768b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:31:04.687761 kubelet[2979]: E1106 05:31:04.687743 2979 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a22609f97fa660c7238e45bc0d139507d1f4be48bfa748ddd45895c77d768b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:31:04.687790 kubelet[2979]: E1106 05:31:04.687763 2979 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a22609f97fa660c7238e45bc0d139507d1f4be48bfa748ddd45895c77d768b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-647585d966-dj8ql" Nov 6 05:31:04.687878 kubelet[2979]: E1106 05:31:04.687774 2979 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a22609f97fa660c7238e45bc0d139507d1f4be48bfa748ddd45895c77d768b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-647585d966-dj8ql" Nov 6 05:31:04.687908 kubelet[2979]: E1106 05:31:04.687894 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-647585d966-dj8ql_calico-apiserver(f3152899-b21d-4929-8f12-210f707e4efb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-647585d966-dj8ql_calico-apiserver(f3152899-b21d-4929-8f12-210f707e4efb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a22609f97fa660c7238e45bc0d139507d1f4be48bfa748ddd45895c77d768b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-647585d966-dj8ql" podUID="f3152899-b21d-4929-8f12-210f707e4efb" Nov 6 05:31:04.688187 containerd[1686]: time="2025-11-06T05:31:04.687995284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647585d966-4pbqk,Uid:e398733d-8974-4f27-8e6f-2b8309aa0ac5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e632a12b238da4a2c89b062c7d0e36cc939fb4d53532238f3e4c642ee82a5c00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 
05:31:04.688290 kubelet[2979]: E1106 05:31:04.688239 2979 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e632a12b238da4a2c89b062c7d0e36cc939fb4d53532238f3e4c642ee82a5c00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:31:04.688290 kubelet[2979]: E1106 05:31:04.688254 2979 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e632a12b238da4a2c89b062c7d0e36cc939fb4d53532238f3e4c642ee82a5c00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-647585d966-4pbqk" Nov 6 05:31:04.688290 kubelet[2979]: E1106 05:31:04.688264 2979 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e632a12b238da4a2c89b062c7d0e36cc939fb4d53532238f3e4c642ee82a5c00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-647585d966-4pbqk" Nov 6 05:31:04.688367 kubelet[2979]: E1106 05:31:04.688284 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-647585d966-4pbqk_calico-apiserver(e398733d-8974-4f27-8e6f-2b8309aa0ac5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-647585d966-4pbqk_calico-apiserver(e398733d-8974-4f27-8e6f-2b8309aa0ac5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e632a12b238da4a2c89b062c7d0e36cc939fb4d53532238f3e4c642ee82a5c00\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-647585d966-4pbqk" podUID="e398733d-8974-4f27-8e6f-2b8309aa0ac5" Nov 6 05:31:04.852041 containerd[1686]: time="2025-11-06T05:31:04.851416965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 6 05:31:05.236798 containerd[1686]: time="2025-11-06T05:31:05.236765973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bb47d67c4-mf2g9,Uid:c9c32848-e569-4835-803e-e1e4f2da5056,Namespace:calico-system,Attempt:0,}" Nov 6 05:31:05.247245 kubelet[2979]: E1106 05:31:05.246975 2979 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition Nov 6 05:31:05.247245 kubelet[2979]: E1106 05:31:05.247032 2979 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78c7b0f4-b8bc-4238-84a0-4c0e67aa615a-config podName:78c7b0f4-b8bc-4238-84a0-4c0e67aa615a nodeName:}" failed. No retries permitted until 2025-11-06 05:31:05.747017325 +0000 UTC m=+34.290640667 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78c7b0f4-b8bc-4238-84a0-4c0e67aa615a-config") pod "goldmane-7c778bb748-5djlt" (UID: "78c7b0f4-b8bc-4238-84a0-4c0e67aa615a") : failed to sync configmap cache: timed out waiting for the condition Nov 6 05:31:05.280237 containerd[1686]: time="2025-11-06T05:31:05.280197076Z" level=error msg="Failed to destroy network for sandbox \"ca0556fff90ec61292d17986fa3d00e0a639130dc250474ca61865668a29bf31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:31:05.281719 systemd[1]: run-netns-cni\x2d89db1332\x2d8dc6\x2d1a5a\x2d7325\x2dac6feaefc4bd.mount: Deactivated successfully. Nov 6 05:31:05.282609 containerd[1686]: time="2025-11-06T05:31:05.282063237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bb47d67c4-mf2g9,Uid:c9c32848-e569-4835-803e-e1e4f2da5056,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca0556fff90ec61292d17986fa3d00e0a639130dc250474ca61865668a29bf31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:31:05.282664 kubelet[2979]: E1106 05:31:05.282210 2979 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca0556fff90ec61292d17986fa3d00e0a639130dc250474ca61865668a29bf31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:31:05.282664 kubelet[2979]: E1106 05:31:05.282250 2979 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"ca0556fff90ec61292d17986fa3d00e0a639130dc250474ca61865668a29bf31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bb47d67c4-mf2g9" Nov 6 05:31:05.282664 kubelet[2979]: E1106 05:31:05.282262 2979 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca0556fff90ec61292d17986fa3d00e0a639130dc250474ca61865668a29bf31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bb47d67c4-mf2g9" Nov 6 05:31:05.282734 kubelet[2979]: E1106 05:31:05.282293 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-bb47d67c4-mf2g9_calico-system(c9c32848-e569-4835-803e-e1e4f2da5056)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-bb47d67c4-mf2g9_calico-system(c9c32848-e569-4835-803e-e1e4f2da5056)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca0556fff90ec61292d17986fa3d00e0a639130dc250474ca61865668a29bf31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bb47d67c4-mf2g9" podUID="c9c32848-e569-4835-803e-e1e4f2da5056" Nov 6 05:31:06.122089 kubelet[2979]: I1106 05:31:06.122064 2979 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 05:31:06.133924 containerd[1686]: time="2025-11-06T05:31:06.133798444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-5djlt,Uid:78c7b0f4-b8bc-4238-84a0-4c0e67aa615a,Namespace:calico-system,Attempt:0,}" Nov 6 
05:31:06.225958 containerd[1686]: time="2025-11-06T05:31:06.225921752Z" level=error msg="Failed to destroy network for sandbox \"14c24c55c84235bb3648299cb4f505052c7aef52f7934bcd76d55f40871b8699\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:31:06.230683 systemd[1]: run-netns-cni\x2d08df2ffe\x2dd53b\x2de3fa\x2d36a4\x2d96ca661decbf.mount: Deactivated successfully. Nov 6 05:31:06.245828 containerd[1686]: time="2025-11-06T05:31:06.245739167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-5djlt,Uid:78c7b0f4-b8bc-4238-84a0-4c0e67aa615a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"14c24c55c84235bb3648299cb4f505052c7aef52f7934bcd76d55f40871b8699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:31:06.246521 kubelet[2979]: E1106 05:31:06.245882 2979 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14c24c55c84235bb3648299cb4f505052c7aef52f7934bcd76d55f40871b8699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:31:06.246521 kubelet[2979]: E1106 05:31:06.245908 2979 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14c24c55c84235bb3648299cb4f505052c7aef52f7934bcd76d55f40871b8699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-7c778bb748-5djlt" Nov 6 05:31:06.246521 kubelet[2979]: E1106 05:31:06.245920 2979 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14c24c55c84235bb3648299cb4f505052c7aef52f7934bcd76d55f40871b8699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-5djlt" Nov 6 05:31:06.246645 kubelet[2979]: E1106 05:31:06.245949 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-5djlt_calico-system(78c7b0f4-b8bc-4238-84a0-4c0e67aa615a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-5djlt_calico-system(78c7b0f4-b8bc-4238-84a0-4c0e67aa615a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14c24c55c84235bb3648299cb4f505052c7aef52f7934bcd76d55f40871b8699\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-5djlt" podUID="78c7b0f4-b8bc-4238-84a0-4c0e67aa615a" Nov 6 05:31:09.281739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1238529870.mount: Deactivated successfully. 
Nov 6 05:31:09.323606 containerd[1686]: time="2025-11-06T05:31:09.303359014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Nov 6 05:31:09.323606 containerd[1686]: time="2025-11-06T05:31:09.323533676Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.472085581s" Nov 6 05:31:09.323606 containerd[1686]: time="2025-11-06T05:31:09.323548804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 6 05:31:09.325972 containerd[1686]: time="2025-11-06T05:31:09.325776613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:31:09.326377 containerd[1686]: time="2025-11-06T05:31:09.326362459Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:31:09.326855 containerd[1686]: time="2025-11-06T05:31:09.326828244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:31:09.362926 containerd[1686]: time="2025-11-06T05:31:09.362897128Z" level=info msg="CreateContainer within sandbox \"04ef05a8d846c2ebf4be9591d486081aeee924af0d3f802194e2a5e556be91af\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 6 05:31:09.391241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3936287246.mount: Deactivated 
successfully. Nov 6 05:31:09.391904 containerd[1686]: time="2025-11-06T05:31:09.391257138Z" level=info msg="Container 9c87f79db3b0d1d4a579465e4b3100345ee932a18d1105963cc1a3e8e9298e9b: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:31:09.402959 containerd[1686]: time="2025-11-06T05:31:09.402938463Z" level=info msg="CreateContainer within sandbox \"04ef05a8d846c2ebf4be9591d486081aeee924af0d3f802194e2a5e556be91af\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9c87f79db3b0d1d4a579465e4b3100345ee932a18d1105963cc1a3e8e9298e9b\"" Nov 6 05:31:09.403441 containerd[1686]: time="2025-11-06T05:31:09.403423252Z" level=info msg="StartContainer for \"9c87f79db3b0d1d4a579465e4b3100345ee932a18d1105963cc1a3e8e9298e9b\"" Nov 6 05:31:09.405401 containerd[1686]: time="2025-11-06T05:31:09.405386259Z" level=info msg="connecting to shim 9c87f79db3b0d1d4a579465e4b3100345ee932a18d1105963cc1a3e8e9298e9b" address="unix:///run/containerd/s/8e4bfaee00ec2cb93070d819b0c07f2743ed487567f9fd2e07cf89559e6cbca5" protocol=ttrpc version=3 Nov 6 05:31:09.477229 systemd[1]: Started cri-containerd-9c87f79db3b0d1d4a579465e4b3100345ee932a18d1105963cc1a3e8e9298e9b.scope - libcontainer container 9c87f79db3b0d1d4a579465e4b3100345ee932a18d1105963cc1a3e8e9298e9b. Nov 6 05:31:09.511247 containerd[1686]: time="2025-11-06T05:31:09.511158866Z" level=info msg="StartContainer for \"9c87f79db3b0d1d4a579465e4b3100345ee932a18d1105963cc1a3e8e9298e9b\" returns successfully" Nov 6 05:31:09.814161 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 6 05:31:09.819368 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 6 05:31:09.884617 kubelet[2979]: I1106 05:31:09.883202 2979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lfdks" podStartSLOduration=1.4097508140000001 podStartE2EDuration="21.88318867s" podCreationTimestamp="2025-11-06 05:30:48 +0000 UTC" firstStartedPulling="2025-11-06 05:30:48.851636383 +0000 UTC m=+17.395259723" lastFinishedPulling="2025-11-06 05:31:09.325074243 +0000 UTC m=+37.868697579" observedRunningTime="2025-11-06 05:31:09.882686864 +0000 UTC m=+38.426310213" watchObservedRunningTime="2025-11-06 05:31:09.88318867 +0000 UTC m=+38.426812018" Nov 6 05:31:10.168019 kubelet[2979]: I1106 05:31:10.167859 2979 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9c32848-e569-4835-803e-e1e4f2da5056-whisker-ca-bundle\") pod \"c9c32848-e569-4835-803e-e1e4f2da5056\" (UID: \"c9c32848-e569-4835-803e-e1e4f2da5056\") " Nov 6 05:31:10.168019 kubelet[2979]: I1106 05:31:10.167896 2979 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c9c32848-e569-4835-803e-e1e4f2da5056-whisker-backend-key-pair\") pod \"c9c32848-e569-4835-803e-e1e4f2da5056\" (UID: \"c9c32848-e569-4835-803e-e1e4f2da5056\") " Nov 6 05:31:10.168019 kubelet[2979]: I1106 05:31:10.167913 2979 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdfbz\" (UniqueName: \"kubernetes.io/projected/c9c32848-e569-4835-803e-e1e4f2da5056-kube-api-access-cdfbz\") pod \"c9c32848-e569-4835-803e-e1e4f2da5056\" (UID: \"c9c32848-e569-4835-803e-e1e4f2da5056\") " Nov 6 05:31:10.196747 kubelet[2979]: I1106 05:31:10.196664 2979 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9c32848-e569-4835-803e-e1e4f2da5056-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"c9c32848-e569-4835-803e-e1e4f2da5056" (UID: "c9c32848-e569-4835-803e-e1e4f2da5056"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 05:31:10.211635 kubelet[2979]: I1106 05:31:10.211207 2979 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9c32848-e569-4835-803e-e1e4f2da5056-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c9c32848-e569-4835-803e-e1e4f2da5056" (UID: "c9c32848-e569-4835-803e-e1e4f2da5056"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 05:31:10.211933 kubelet[2979]: I1106 05:31:10.211908 2979 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9c32848-e569-4835-803e-e1e4f2da5056-kube-api-access-cdfbz" (OuterVolumeSpecName: "kube-api-access-cdfbz") pod "c9c32848-e569-4835-803e-e1e4f2da5056" (UID: "c9c32848-e569-4835-803e-e1e4f2da5056"). InnerVolumeSpecName "kube-api-access-cdfbz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 05:31:10.269149 kubelet[2979]: I1106 05:31:10.269101 2979 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9c32848-e569-4835-803e-e1e4f2da5056-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 6 05:31:10.269149 kubelet[2979]: I1106 05:31:10.269153 2979 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c9c32848-e569-4835-803e-e1e4f2da5056-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 6 05:31:10.269263 kubelet[2979]: I1106 05:31:10.269161 2979 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cdfbz\" (UniqueName: \"kubernetes.io/projected/c9c32848-e569-4835-803e-e1e4f2da5056-kube-api-access-cdfbz\") on node \"localhost\" DevicePath \"\"" Nov 6 05:31:10.282266 systemd[1]: var-lib-kubelet-pods-c9c32848\x2de569\x2d4835\x2d803e\x2de1e4f2da5056-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcdfbz.mount: Deactivated successfully. Nov 6 05:31:10.282330 systemd[1]: var-lib-kubelet-pods-c9c32848\x2de569\x2d4835\x2d803e\x2de1e4f2da5056-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 6 05:31:10.871001 systemd[1]: Removed slice kubepods-besteffort-podc9c32848_e569_4835_803e_e1e4f2da5056.slice - libcontainer container kubepods-besteffort-podc9c32848_e569_4835_803e_e1e4f2da5056.slice. Nov 6 05:31:10.943456 systemd[1]: Created slice kubepods-besteffort-podaf2848d9_f5cf_4973_af81_ec9678a8d6c7.slice - libcontainer container kubepods-besteffort-podaf2848d9_f5cf_4973_af81_ec9678a8d6c7.slice. 
Nov 6 05:31:11.026151 kubelet[2979]: I1106 05:31:11.026106 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af2848d9-f5cf-4973-af81-ec9678a8d6c7-whisker-ca-bundle\") pod \"whisker-66c4b95784-sxn7m\" (UID: \"af2848d9-f5cf-4973-af81-ec9678a8d6c7\") " pod="calico-system/whisker-66c4b95784-sxn7m" Nov 6 05:31:11.026574 kubelet[2979]: I1106 05:31:11.026486 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47t4t\" (UniqueName: \"kubernetes.io/projected/af2848d9-f5cf-4973-af81-ec9678a8d6c7-kube-api-access-47t4t\") pod \"whisker-66c4b95784-sxn7m\" (UID: \"af2848d9-f5cf-4973-af81-ec9678a8d6c7\") " pod="calico-system/whisker-66c4b95784-sxn7m" Nov 6 05:31:11.026574 kubelet[2979]: I1106 05:31:11.026546 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/af2848d9-f5cf-4973-af81-ec9678a8d6c7-whisker-backend-key-pair\") pod \"whisker-66c4b95784-sxn7m\" (UID: \"af2848d9-f5cf-4973-af81-ec9678a8d6c7\") " pod="calico-system/whisker-66c4b95784-sxn7m" Nov 6 05:31:11.258568 containerd[1686]: time="2025-11-06T05:31:11.258506940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66c4b95784-sxn7m,Uid:af2848d9-f5cf-4973-af81-ec9678a8d6c7,Namespace:calico-system,Attempt:0,}" Nov 6 05:31:11.610507 kubelet[2979]: I1106 05:31:11.610473 2979 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9c32848-e569-4835-803e-e1e4f2da5056" path="/var/lib/kubelet/pods/c9c32848-e569-4835-803e-e1e4f2da5056/volumes" Nov 6 05:31:12.057909 systemd-networkd[1562]: vxlan.calico: Link UP Nov 6 05:31:12.057914 systemd-networkd[1562]: vxlan.calico: Gained carrier Nov 6 05:31:12.931611 systemd-networkd[1562]: cali705c22f4d8e: Link UP Nov 6 05:31:12.931732 systemd-networkd[1562]: cali705c22f4d8e: 
Gained carrier Nov 6 05:31:12.950763 containerd[1686]: 2025-11-06 05:31:11.364 [INFO][4205] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 05:31:12.950763 containerd[1686]: 2025-11-06 05:31:11.853 [INFO][4205] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--66c4b95784--sxn7m-eth0 whisker-66c4b95784- calico-system af2848d9-f5cf-4973-af81-ec9678a8d6c7 886 0 2025-11-06 05:31:10 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66c4b95784 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-66c4b95784-sxn7m eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali705c22f4d8e [] [] }} ContainerID="3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da" Namespace="calico-system" Pod="whisker-66c4b95784-sxn7m" WorkloadEndpoint="localhost-k8s-whisker--66c4b95784--sxn7m-" Nov 6 05:31:12.950763 containerd[1686]: 2025-11-06 05:31:11.862 [INFO][4205] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da" Namespace="calico-system" Pod="whisker-66c4b95784-sxn7m" WorkloadEndpoint="localhost-k8s-whisker--66c4b95784--sxn7m-eth0" Nov 6 05:31:12.950763 containerd[1686]: 2025-11-06 05:31:12.832 [INFO][4237] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da" HandleID="k8s-pod-network.3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da" Workload="localhost-k8s-whisker--66c4b95784--sxn7m-eth0" Nov 6 05:31:12.951082 containerd[1686]: 2025-11-06 05:31:12.834 [INFO][4237] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da" 
HandleID="k8s-pod-network.3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da" Workload="localhost-k8s-whisker--66c4b95784--sxn7m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c9720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-66c4b95784-sxn7m", "timestamp":"2025-11-06 05:31:12.832606243 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:31:12.951082 containerd[1686]: 2025-11-06 05:31:12.834 [INFO][4237] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:31:12.951082 containerd[1686]: 2025-11-06 05:31:12.834 [INFO][4237] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 05:31:12.951082 containerd[1686]: 2025-11-06 05:31:12.835 [INFO][4237] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 05:31:12.951082 containerd[1686]: 2025-11-06 05:31:12.895 [INFO][4237] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da" host="localhost" Nov 6 05:31:12.951082 containerd[1686]: 2025-11-06 05:31:12.905 [INFO][4237] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 05:31:12.951082 containerd[1686]: 2025-11-06 05:31:12.908 [INFO][4237] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 05:31:12.951082 containerd[1686]: 2025-11-06 05:31:12.909 [INFO][4237] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 05:31:12.951082 containerd[1686]: 2025-11-06 05:31:12.911 [INFO][4237] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 05:31:12.951082 containerd[1686]: 2025-11-06 05:31:12.911 [INFO][4237] 
ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da" host="localhost" Nov 6 05:31:12.952188 containerd[1686]: 2025-11-06 05:31:12.913 [INFO][4237] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da Nov 6 05:31:12.952188 containerd[1686]: 2025-11-06 05:31:12.916 [INFO][4237] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da" host="localhost" Nov 6 05:31:12.952188 containerd[1686]: 2025-11-06 05:31:12.922 [INFO][4237] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da" host="localhost" Nov 6 05:31:12.952188 containerd[1686]: 2025-11-06 05:31:12.922 [INFO][4237] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da" host="localhost" Nov 6 05:31:12.952188 containerd[1686]: 2025-11-06 05:31:12.922 [INFO][4237] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 05:31:12.952188 containerd[1686]: 2025-11-06 05:31:12.923 [INFO][4237] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da" HandleID="k8s-pod-network.3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da" Workload="localhost-k8s-whisker--66c4b95784--sxn7m-eth0" Nov 6 05:31:12.953240 containerd[1686]: 2025-11-06 05:31:12.924 [INFO][4205] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da" Namespace="calico-system" Pod="whisker-66c4b95784-sxn7m" WorkloadEndpoint="localhost-k8s-whisker--66c4b95784--sxn7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--66c4b95784--sxn7m-eth0", GenerateName:"whisker-66c4b95784-", Namespace:"calico-system", SelfLink:"", UID:"af2848d9-f5cf-4973-af81-ec9678a8d6c7", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 31, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66c4b95784", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-66c4b95784-sxn7m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali705c22f4d8e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:31:12.953240 containerd[1686]: 2025-11-06 05:31:12.924 [INFO][4205] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da" Namespace="calico-system" Pod="whisker-66c4b95784-sxn7m" WorkloadEndpoint="localhost-k8s-whisker--66c4b95784--sxn7m-eth0" Nov 6 05:31:12.953303 containerd[1686]: 2025-11-06 05:31:12.924 [INFO][4205] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali705c22f4d8e ContainerID="3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da" Namespace="calico-system" Pod="whisker-66c4b95784-sxn7m" WorkloadEndpoint="localhost-k8s-whisker--66c4b95784--sxn7m-eth0" Nov 6 05:31:12.953303 containerd[1686]: 2025-11-06 05:31:12.933 [INFO][4205] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da" Namespace="calico-system" Pod="whisker-66c4b95784-sxn7m" WorkloadEndpoint="localhost-k8s-whisker--66c4b95784--sxn7m-eth0" Nov 6 05:31:12.954186 containerd[1686]: 2025-11-06 05:31:12.934 [INFO][4205] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da" Namespace="calico-system" Pod="whisker-66c4b95784-sxn7m" WorkloadEndpoint="localhost-k8s-whisker--66c4b95784--sxn7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--66c4b95784--sxn7m-eth0", GenerateName:"whisker-66c4b95784-", Namespace:"calico-system", SelfLink:"", UID:"af2848d9-f5cf-4973-af81-ec9678a8d6c7", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 31, 10, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66c4b95784", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da", Pod:"whisker-66c4b95784-sxn7m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali705c22f4d8e", MAC:"52:df:db:bb:26:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:31:12.954230 containerd[1686]: 2025-11-06 05:31:12.946 [INFO][4205] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da" Namespace="calico-system" Pod="whisker-66c4b95784-sxn7m" WorkloadEndpoint="localhost-k8s-whisker--66c4b95784--sxn7m-eth0" Nov 6 05:31:13.066290 containerd[1686]: time="2025-11-06T05:31:13.066231442Z" level=info msg="connecting to shim 3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da" address="unix:///run/containerd/s/58243dda89f3cc24943a40b1b463e9356b9a9fcbe25dd38c90f4a5d946079baa" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:31:13.087220 systemd[1]: Started cri-containerd-3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da.scope - libcontainer container 3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da. 
Nov 6 05:31:13.097224 systemd-resolved[1563]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 05:31:13.141438 containerd[1686]: time="2025-11-06T05:31:13.141403864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66c4b95784-sxn7m,Uid:af2848d9-f5cf-4973-af81-ec9678a8d6c7,Namespace:calico-system,Attempt:0,} returns sandbox id \"3cbf176a89a53a9120a93489709d1a13d0307384b541bb344c921250355ab3da\"" Nov 6 05:31:13.144076 containerd[1686]: time="2025-11-06T05:31:13.144053940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 05:31:13.489271 containerd[1686]: time="2025-11-06T05:31:13.489230020Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:31:13.505476 containerd[1686]: time="2025-11-06T05:31:13.505425910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 6 05:31:13.505476 containerd[1686]: time="2025-11-06T05:31:13.505463297Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 05:31:13.505651 kubelet[2979]: E1106 05:31:13.505613 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 05:31:13.506078 kubelet[2979]: E1106 05:31:13.505659 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 05:31:13.532950 kubelet[2979]: E1106 05:31:13.532913 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-66c4b95784-sxn7m_calico-system(af2848d9-f5cf-4973-af81-ec9678a8d6c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 05:31:13.534044 containerd[1686]: time="2025-11-06T05:31:13.533992790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 05:31:13.890299 containerd[1686]: time="2025-11-06T05:31:13.890251243Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:31:13.900682 containerd[1686]: time="2025-11-06T05:31:13.900653509Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 05:31:13.900764 containerd[1686]: time="2025-11-06T05:31:13.900710440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 6 05:31:13.900850 kubelet[2979]: E1106 05:31:13.900824 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 05:31:13.900890 kubelet[2979]: E1106 05:31:13.900869 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 05:31:13.900944 kubelet[2979]: E1106 05:31:13.900920 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-66c4b95784-sxn7m_calico-system(af2848d9-f5cf-4973-af81-ec9678a8d6c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 05:31:13.900984 kubelet[2979]: E1106 05:31:13.900966 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c4b95784-sxn7m" podUID="af2848d9-f5cf-4973-af81-ec9678a8d6c7" Nov 6 05:31:14.078373 systemd-networkd[1562]: vxlan.calico: Gained IPv6LL Nov 6 05:31:14.527279 systemd-networkd[1562]: cali705c22f4d8e: Gained IPv6LL Nov 6 05:31:14.871539 kubelet[2979]: E1106 05:31:14.871508 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to 
\"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c4b95784-sxn7m" podUID="af2848d9-f5cf-4973-af81-ec9678a8d6c7" Nov 6 05:31:16.602708 containerd[1686]: time="2025-11-06T05:31:16.602640247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647585d966-dj8ql,Uid:f3152899-b21d-4929-8f12-210f707e4efb,Namespace:calico-apiserver,Attempt:0,}" Nov 6 05:31:16.602708 containerd[1686]: time="2025-11-06T05:31:16.602697907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zs4hc,Uid:67fe9b67-1f92-4893-a091-15ef1297d78d,Namespace:kube-system,Attempt:0,}" Nov 6 05:31:16.602708 containerd[1686]: time="2025-11-06T05:31:16.602737635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-65j25,Uid:8c33e9bc-df47-4f69-aab6-628eca0dd480,Namespace:calico-system,Attempt:0,}" Nov 6 05:31:16.629228 containerd[1686]: time="2025-11-06T05:31:16.629197493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-678545c96c-k4phr,Uid:d9191e4f-5d9b-4552-8319-79f0f354642e,Namespace:calico-system,Attempt:0,}" Nov 6 05:31:16.801371 systemd-networkd[1562]: cali3323cebc0d9: Link UP Nov 6 05:31:16.802060 systemd-networkd[1562]: cali3323cebc0d9: Gained carrier Nov 6 05:31:16.824306 containerd[1686]: 2025-11-06 05:31:16.675 [INFO][4400] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--647585d966--dj8ql-eth0 calico-apiserver-647585d966- calico-apiserver f3152899-b21d-4929-8f12-210f707e4efb 805 0 2025-11-06 05:30:44 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:647585d966 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-647585d966-dj8ql eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3323cebc0d9 [] [] }} ContainerID="bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e" Namespace="calico-apiserver" Pod="calico-apiserver-647585d966-dj8ql" WorkloadEndpoint="localhost-k8s-calico--apiserver--647585d966--dj8ql-" Nov 6 05:31:16.824306 containerd[1686]: 2025-11-06 05:31:16.675 [INFO][4400] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e" Namespace="calico-apiserver" Pod="calico-apiserver-647585d966-dj8ql" WorkloadEndpoint="localhost-k8s-calico--apiserver--647585d966--dj8ql-eth0" Nov 6 05:31:16.824306 containerd[1686]: 2025-11-06 05:31:16.750 [INFO][4448] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e" HandleID="k8s-pod-network.bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e" Workload="localhost-k8s-calico--apiserver--647585d966--dj8ql-eth0" Nov 6 05:31:16.826054 containerd[1686]: 2025-11-06 05:31:16.753 [INFO][4448] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e" HandleID="k8s-pod-network.bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e" Workload="localhost-k8s-calico--apiserver--647585d966--dj8ql-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fa30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-647585d966-dj8ql", "timestamp":"2025-11-06 05:31:16.750348145 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:31:16.826054 containerd[1686]: 2025-11-06 05:31:16.753 [INFO][4448] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:31:16.826054 containerd[1686]: 2025-11-06 05:31:16.753 [INFO][4448] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 05:31:16.826054 containerd[1686]: 2025-11-06 05:31:16.753 [INFO][4448] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 05:31:16.826054 containerd[1686]: 2025-11-06 05:31:16.761 [INFO][4448] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e" host="localhost" Nov 6 05:31:16.826054 containerd[1686]: 2025-11-06 05:31:16.763 [INFO][4448] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 05:31:16.826054 containerd[1686]: 2025-11-06 05:31:16.775 [INFO][4448] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 05:31:16.826054 containerd[1686]: 2025-11-06 05:31:16.776 [INFO][4448] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 05:31:16.826054 containerd[1686]: 2025-11-06 05:31:16.777 [INFO][4448] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 05:31:16.826054 containerd[1686]: 2025-11-06 05:31:16.778 [INFO][4448] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e" host="localhost" Nov 6 05:31:16.826797 containerd[1686]: 2025-11-06 05:31:16.778 [INFO][4448] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e 
Nov 6 05:31:16.826797 containerd[1686]: 2025-11-06 05:31:16.782 [INFO][4448] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e" host="localhost" Nov 6 05:31:16.826797 containerd[1686]: 2025-11-06 05:31:16.792 [INFO][4448] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e" host="localhost" Nov 6 05:31:16.826797 containerd[1686]: 2025-11-06 05:31:16.793 [INFO][4448] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e" host="localhost" Nov 6 05:31:16.826797 containerd[1686]: 2025-11-06 05:31:16.793 [INFO][4448] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 05:31:16.826797 containerd[1686]: 2025-11-06 05:31:16.793 [INFO][4448] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e" HandleID="k8s-pod-network.bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e" Workload="localhost-k8s-calico--apiserver--647585d966--dj8ql-eth0" Nov 6 05:31:16.826900 containerd[1686]: 2025-11-06 05:31:16.798 [INFO][4400] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e" Namespace="calico-apiserver" Pod="calico-apiserver-647585d966-dj8ql" WorkloadEndpoint="localhost-k8s-calico--apiserver--647585d966--dj8ql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--647585d966--dj8ql-eth0", GenerateName:"calico-apiserver-647585d966-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"f3152899-b21d-4929-8f12-210f707e4efb", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 30, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647585d966", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-647585d966-dj8ql", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3323cebc0d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:31:16.826954 containerd[1686]: 2025-11-06 05:31:16.798 [INFO][4400] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e" Namespace="calico-apiserver" Pod="calico-apiserver-647585d966-dj8ql" WorkloadEndpoint="localhost-k8s-calico--apiserver--647585d966--dj8ql-eth0" Nov 6 05:31:16.826954 containerd[1686]: 2025-11-06 05:31:16.798 [INFO][4400] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3323cebc0d9 ContainerID="bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e" Namespace="calico-apiserver" Pod="calico-apiserver-647585d966-dj8ql" WorkloadEndpoint="localhost-k8s-calico--apiserver--647585d966--dj8ql-eth0" Nov 6 05:31:16.826954 containerd[1686]: 
2025-11-06 05:31:16.805 [INFO][4400] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e" Namespace="calico-apiserver" Pod="calico-apiserver-647585d966-dj8ql" WorkloadEndpoint="localhost-k8s-calico--apiserver--647585d966--dj8ql-eth0" Nov 6 05:31:16.827069 containerd[1686]: 2025-11-06 05:31:16.806 [INFO][4400] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e" Namespace="calico-apiserver" Pod="calico-apiserver-647585d966-dj8ql" WorkloadEndpoint="localhost-k8s-calico--apiserver--647585d966--dj8ql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--647585d966--dj8ql-eth0", GenerateName:"calico-apiserver-647585d966-", Namespace:"calico-apiserver", SelfLink:"", UID:"f3152899-b21d-4929-8f12-210f707e4efb", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 30, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647585d966", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e", Pod:"calico-apiserver-647585d966-dj8ql", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3323cebc0d9", MAC:"3e:fa:66:31:af:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:31:16.827180 containerd[1686]: 2025-11-06 05:31:16.821 [INFO][4400] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e" Namespace="calico-apiserver" Pod="calico-apiserver-647585d966-dj8ql" WorkloadEndpoint="localhost-k8s-calico--apiserver--647585d966--dj8ql-eth0" Nov 6 05:31:16.842626 containerd[1686]: time="2025-11-06T05:31:16.842594370Z" level=info msg="connecting to shim bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e" address="unix:///run/containerd/s/34a89777d8fe43c4cf26948279f9752498ee3d57a076cb3ead73955790e161af" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:31:16.862326 systemd[1]: Started cri-containerd-bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e.scope - libcontainer container bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e. 
Nov 6 05:31:16.878676 systemd-resolved[1563]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 05:31:16.890873 systemd-networkd[1562]: caliddcb66269aa: Link UP Nov 6 05:31:16.891001 systemd-networkd[1562]: caliddcb66269aa: Gained carrier Nov 6 05:31:16.906422 containerd[1686]: 2025-11-06 05:31:16.726 [INFO][4441] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--678545c96c--k4phr-eth0 calico-kube-controllers-678545c96c- calico-system d9191e4f-5d9b-4552-8319-79f0f354642e 811 0 2025-11-06 05:30:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:678545c96c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-678545c96c-k4phr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliddcb66269aa [] [] }} ContainerID="fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5" Namespace="calico-system" Pod="calico-kube-controllers-678545c96c-k4phr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678545c96c--k4phr-" Nov 6 05:31:16.906422 containerd[1686]: 2025-11-06 05:31:16.726 [INFO][4441] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5" Namespace="calico-system" Pod="calico-kube-controllers-678545c96c-k4phr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678545c96c--k4phr-eth0" Nov 6 05:31:16.906422 containerd[1686]: 2025-11-06 05:31:16.764 [INFO][4468] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5" 
HandleID="k8s-pod-network.fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5" Workload="localhost-k8s-calico--kube--controllers--678545c96c--k4phr-eth0" Nov 6 05:31:16.906718 containerd[1686]: 2025-11-06 05:31:16.765 [INFO][4468] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5" HandleID="k8s-pod-network.fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5" Workload="localhost-k8s-calico--kube--controllers--678545c96c--k4phr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5910), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-678545c96c-k4phr", "timestamp":"2025-11-06 05:31:16.764026258 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:31:16.906718 containerd[1686]: 2025-11-06 05:31:16.765 [INFO][4468] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:31:16.906718 containerd[1686]: 2025-11-06 05:31:16.793 [INFO][4468] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
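Each CNI invocation above brackets its block update with the same three entries: "About to acquire host-wide IPAM lock", "Acquired host-wide IPAM lock", then "Released host-wide IPAM lock" after the claim is written. That serialization is what keeps concurrent pod setups on one node from racing on the same /26 block. A minimal sketch of the pattern (all names here are hypothetical stand-ins, not Calico's actual implementation):

```python
import threading

ipam_lock = threading.Lock()   # stands in for Calico's host-wide IPAM lock
assigned = []                  # addresses already claimed from the node's block

def assign_address(block_base="192.168.88"):
    """Claim the next free address from the node's /26 block."""
    # The with-block mirrors the log's bracketing: acquire the host-wide
    # lock, do the read-modify-write on the block, then release it.
    with ipam_lock:
        addr = f"{block_base}.{129 + len(assigned)}"
        assigned.append(addr)
        return addr
```

Without the lock, two concurrent CNI ADDs could read the same free slot and hand the same IP to two pods; with it, assignments are strictly sequential, as the .129/.130/.131 progression in this log shows.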
Nov 6 05:31:16.906718 containerd[1686]: 2025-11-06 05:31:16.793 [INFO][4468] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 05:31:16.906718 containerd[1686]: 2025-11-06 05:31:16.861 [INFO][4468] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5" host="localhost" Nov 6 05:31:16.906718 containerd[1686]: 2025-11-06 05:31:16.865 [INFO][4468] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 05:31:16.906718 containerd[1686]: 2025-11-06 05:31:16.870 [INFO][4468] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 05:31:16.906718 containerd[1686]: 2025-11-06 05:31:16.874 [INFO][4468] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 05:31:16.906718 containerd[1686]: 2025-11-06 05:31:16.875 [INFO][4468] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 05:31:16.906718 containerd[1686]: 2025-11-06 05:31:16.875 [INFO][4468] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5" host="localhost" Nov 6 05:31:16.906914 containerd[1686]: 2025-11-06 05:31:16.877 [INFO][4468] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5 Nov 6 05:31:16.906914 containerd[1686]: 2025-11-06 05:31:16.880 [INFO][4468] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5" host="localhost" Nov 6 05:31:16.906914 containerd[1686]: 2025-11-06 05:31:16.883 [INFO][4468] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5" host="localhost" Nov 6 05:31:16.906914 containerd[1686]: 2025-11-06 05:31:16.883 [INFO][4468] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5" host="localhost" Nov 6 05:31:16.906914 containerd[1686]: 2025-11-06 05:31:16.883 [INFO][4468] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 05:31:16.906914 containerd[1686]: 2025-11-06 05:31:16.883 [INFO][4468] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5" HandleID="k8s-pod-network.fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5" Workload="localhost-k8s-calico--kube--controllers--678545c96c--k4phr-eth0" Nov 6 05:31:16.907730 containerd[1686]: 2025-11-06 05:31:16.888 [INFO][4441] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5" Namespace="calico-system" Pod="calico-kube-controllers-678545c96c-k4phr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678545c96c--k4phr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--678545c96c--k4phr-eth0", GenerateName:"calico-kube-controllers-678545c96c-", Namespace:"calico-system", SelfLink:"", UID:"d9191e4f-5d9b-4552-8319-79f0f354642e", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 30, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"678545c96c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-678545c96c-k4phr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliddcb66269aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:31:16.907774 containerd[1686]: 2025-11-06 05:31:16.888 [INFO][4441] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5" Namespace="calico-system" Pod="calico-kube-controllers-678545c96c-k4phr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678545c96c--k4phr-eth0" Nov 6 05:31:16.907774 containerd[1686]: 2025-11-06 05:31:16.888 [INFO][4441] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliddcb66269aa ContainerID="fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5" Namespace="calico-system" Pod="calico-kube-controllers-678545c96c-k4phr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678545c96c--k4phr-eth0" Nov 6 05:31:16.907774 containerd[1686]: 2025-11-06 05:31:16.894 [INFO][4441] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5" Namespace="calico-system" Pod="calico-kube-controllers-678545c96c-k4phr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678545c96c--k4phr-eth0" Nov 6 05:31:16.907826 containerd[1686]: 2025-11-06 
05:31:16.895 [INFO][4441] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5" Namespace="calico-system" Pod="calico-kube-controllers-678545c96c-k4phr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678545c96c--k4phr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--678545c96c--k4phr-eth0", GenerateName:"calico-kube-controllers-678545c96c-", Namespace:"calico-system", SelfLink:"", UID:"d9191e4f-5d9b-4552-8319-79f0f354642e", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 30, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"678545c96c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5", Pod:"calico-kube-controllers-678545c96c-k4phr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliddcb66269aa", MAC:"92:6c:41:66:fb:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:31:16.907870 containerd[1686]: 2025-11-06 
05:31:16.903 [INFO][4441] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5" Namespace="calico-system" Pod="calico-kube-controllers-678545c96c-k4phr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678545c96c--k4phr-eth0" Nov 6 05:31:16.925792 containerd[1686]: time="2025-11-06T05:31:16.925759074Z" level=info msg="connecting to shim fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5" address="unix:///run/containerd/s/1bb769ca3052169dc68e7cc63aa01146b525896ee0f7af9dcdc391757f133787" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:31:16.938624 containerd[1686]: time="2025-11-06T05:31:16.938594578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647585d966-dj8ql,Uid:f3152899-b21d-4929-8f12-210f707e4efb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bcf68ebb0125de42903ab58ffb8e1b02db547a976dad2e63c5f17a30ef8ce54e\"" Nov 6 05:31:16.950421 containerd[1686]: time="2025-11-06T05:31:16.950393118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 05:31:16.960269 systemd[1]: Started cri-containerd-fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5.scope - libcontainer container fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5. 
Nov 6 05:31:16.972600 systemd-resolved[1563]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 05:31:16.990514 systemd-networkd[1562]: cali7b941c721e8: Link UP Nov 6 05:31:16.991939 systemd-networkd[1562]: cali7b941c721e8: Gained carrier Nov 6 05:31:17.001908 containerd[1686]: 2025-11-06 05:31:16.686 [INFO][4412] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--zs4hc-eth0 coredns-66bc5c9577- kube-system 67fe9b67-1f92-4893-a091-15ef1297d78d 802 0 2025-11-06 05:30:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-zs4hc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7b941c721e8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0" Namespace="kube-system" Pod="coredns-66bc5c9577-zs4hc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zs4hc-" Nov 6 05:31:17.001908 containerd[1686]: 2025-11-06 05:31:16.687 [INFO][4412] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0" Namespace="kube-system" Pod="coredns-66bc5c9577-zs4hc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zs4hc-eth0" Nov 6 05:31:17.001908 containerd[1686]: 2025-11-06 05:31:16.764 [INFO][4453] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0" HandleID="k8s-pod-network.0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0" Workload="localhost-k8s-coredns--66bc5c9577--zs4hc-eth0" Nov 6 05:31:17.002075 containerd[1686]: 
2025-11-06 05:31:16.765 [INFO][4453] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0" HandleID="k8s-pod-network.0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0" Workload="localhost-k8s-coredns--66bc5c9577--zs4hc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df8a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-zs4hc", "timestamp":"2025-11-06 05:31:16.764766551 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:31:17.002075 containerd[1686]: 2025-11-06 05:31:16.765 [INFO][4453] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:31:17.002075 containerd[1686]: 2025-11-06 05:31:16.883 [INFO][4453] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 05:31:17.002075 containerd[1686]: 2025-11-06 05:31:16.883 [INFO][4453] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 05:31:17.002075 containerd[1686]: 2025-11-06 05:31:16.962 [INFO][4453] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0" host="localhost" Nov 6 05:31:17.002075 containerd[1686]: 2025-11-06 05:31:16.967 [INFO][4453] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 05:31:17.002075 containerd[1686]: 2025-11-06 05:31:16.970 [INFO][4453] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 05:31:17.002075 containerd[1686]: 2025-11-06 05:31:16.972 [INFO][4453] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 05:31:17.002075 containerd[1686]: 2025-11-06 05:31:16.974 [INFO][4453] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 05:31:17.002075 containerd[1686]: 2025-11-06 05:31:16.974 [INFO][4453] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0" host="localhost" Nov 6 05:31:17.002304 containerd[1686]: 2025-11-06 05:31:16.975 [INFO][4453] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0 Nov 6 05:31:17.002304 containerd[1686]: 2025-11-06 05:31:16.977 [INFO][4453] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0" host="localhost" Nov 6 05:31:17.002304 containerd[1686]: 2025-11-06 05:31:16.981 [INFO][4453] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0" host="localhost" Nov 6 05:31:17.002304 containerd[1686]: 2025-11-06 05:31:16.981 [INFO][4453] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0" host="localhost" Nov 6 05:31:17.002304 containerd[1686]: 2025-11-06 05:31:16.982 [INFO][4453] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 05:31:17.002304 containerd[1686]: 2025-11-06 05:31:16.982 [INFO][4453] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0" HandleID="k8s-pod-network.0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0" Workload="localhost-k8s-coredns--66bc5c9577--zs4hc-eth0" Nov 6 05:31:17.002493 containerd[1686]: 2025-11-06 05:31:16.984 [INFO][4412] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0" Namespace="kube-system" Pod="coredns-66bc5c9577-zs4hc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zs4hc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--zs4hc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"67fe9b67-1f92-4893-a091-15ef1297d78d", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 30, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-zs4hc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b941c721e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:31:17.002493 containerd[1686]: 2025-11-06 05:31:16.984 [INFO][4412] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0" Namespace="kube-system" Pod="coredns-66bc5c9577-zs4hc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zs4hc-eth0" Nov 6 05:31:17.002493 containerd[1686]: 2025-11-06 05:31:16.985 [INFO][4412] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b941c721e8 ContainerID="0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0" Namespace="kube-system" Pod="coredns-66bc5c9577-zs4hc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zs4hc-eth0" Nov 6 
05:31:17.002493 containerd[1686]: 2025-11-06 05:31:16.991 [INFO][4412] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0" Namespace="kube-system" Pod="coredns-66bc5c9577-zs4hc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zs4hc-eth0" Nov 6 05:31:17.002493 containerd[1686]: 2025-11-06 05:31:16.992 [INFO][4412] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0" Namespace="kube-system" Pod="coredns-66bc5c9577-zs4hc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zs4hc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--zs4hc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"67fe9b67-1f92-4893-a091-15ef1297d78d", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 30, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0", Pod:"coredns-66bc5c9577-zs4hc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b941c721e8", 
MAC:"f6:34:10:e9:61:07", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:31:17.002493 containerd[1686]: 2025-11-06 05:31:17.000 [INFO][4412] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0" Namespace="kube-system" Pod="coredns-66bc5c9577-zs4hc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zs4hc-eth0" Nov 6 05:31:17.023061 containerd[1686]: time="2025-11-06T05:31:17.023038400Z" level=info msg="connecting to shim 0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0" address="unix:///run/containerd/s/6875bf4f302d27cccd4e75bf7183920a4311670939650c26459d344942c9ddf6" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:31:17.026717 containerd[1686]: time="2025-11-06T05:31:17.026700177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-678545c96c-k4phr,Uid:d9191e4f-5d9b-4552-8319-79f0f354642e,Namespace:calico-system,Attempt:0,} returns sandbox id \"fcc647dea5bb6ff9b63ad067e23de917c7f3f642269cd154fc25e4338ebcf0e5\"" Nov 6 05:31:17.043247 systemd[1]: Started cri-containerd-0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0.scope - 
libcontainer container 0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0. Nov 6 05:31:17.051875 systemd-resolved[1563]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 05:31:17.089720 systemd-networkd[1562]: cali731ef3c1e03: Link UP Nov 6 05:31:17.091212 systemd-networkd[1562]: cali731ef3c1e03: Gained carrier Nov 6 05:31:17.104652 containerd[1686]: 2025-11-06 05:31:16.722 [INFO][4422] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--65j25-eth0 csi-node-driver- calico-system 8c33e9bc-df47-4f69-aab6-628eca0dd480 689 0 2025-11-06 05:30:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-65j25 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali731ef3c1e03 [] [] }} ContainerID="8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e" Namespace="calico-system" Pod="csi-node-driver-65j25" WorkloadEndpoint="localhost-k8s-csi--node--driver--65j25-" Nov 6 05:31:17.104652 containerd[1686]: 2025-11-06 05:31:16.722 [INFO][4422] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e" Namespace="calico-system" Pod="csi-node-driver-65j25" WorkloadEndpoint="localhost-k8s-csi--node--driver--65j25-eth0" Nov 6 05:31:17.104652 containerd[1686]: 2025-11-06 05:31:16.766 [INFO][4463] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e" HandleID="k8s-pod-network.8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e" 
Workload="localhost-k8s-csi--node--driver--65j25-eth0" Nov 6 05:31:17.104652 containerd[1686]: 2025-11-06 05:31:16.766 [INFO][4463] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e" HandleID="k8s-pod-network.8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e" Workload="localhost-k8s-csi--node--driver--65j25-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-65j25", "timestamp":"2025-11-06 05:31:16.766000325 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:31:17.104652 containerd[1686]: 2025-11-06 05:31:16.766 [INFO][4463] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:31:17.104652 containerd[1686]: 2025-11-06 05:31:16.982 [INFO][4463] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 05:31:17.104652 containerd[1686]: 2025-11-06 05:31:16.982 [INFO][4463] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 05:31:17.104652 containerd[1686]: 2025-11-06 05:31:17.062 [INFO][4463] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e" host="localhost" Nov 6 05:31:17.104652 containerd[1686]: 2025-11-06 05:31:17.067 [INFO][4463] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 05:31:17.104652 containerd[1686]: 2025-11-06 05:31:17.071 [INFO][4463] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 05:31:17.104652 containerd[1686]: 2025-11-06 05:31:17.072 [INFO][4463] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 05:31:17.104652 containerd[1686]: 2025-11-06 05:31:17.074 [INFO][4463] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 05:31:17.104652 containerd[1686]: 2025-11-06 05:31:17.074 [INFO][4463] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e" host="localhost" Nov 6 05:31:17.104652 containerd[1686]: 2025-11-06 05:31:17.075 [INFO][4463] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e Nov 6 05:31:17.104652 containerd[1686]: 2025-11-06 05:31:17.077 [INFO][4463] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e" host="localhost" Nov 6 05:31:17.104652 containerd[1686]: 2025-11-06 05:31:17.080 [INFO][4463] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e" host="localhost" Nov 6 05:31:17.104652 containerd[1686]: 2025-11-06 05:31:17.080 [INFO][4463] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e" host="localhost" Nov 6 05:31:17.104652 containerd[1686]: 2025-11-06 05:31:17.080 [INFO][4463] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 05:31:17.104652 containerd[1686]: 2025-11-06 05:31:17.080 [INFO][4463] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e" HandleID="k8s-pod-network.8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e" Workload="localhost-k8s-csi--node--driver--65j25-eth0" Nov 6 05:31:17.106082 containerd[1686]: 2025-11-06 05:31:17.084 [INFO][4422] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e" Namespace="calico-system" Pod="csi-node-driver-65j25" WorkloadEndpoint="localhost-k8s-csi--node--driver--65j25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--65j25-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8c33e9bc-df47-4f69-aab6-628eca0dd480", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 30, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-65j25", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali731ef3c1e03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:31:17.106082 containerd[1686]: 2025-11-06 05:31:17.084 [INFO][4422] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e" Namespace="calico-system" Pod="csi-node-driver-65j25" WorkloadEndpoint="localhost-k8s-csi--node--driver--65j25-eth0" Nov 6 05:31:17.106082 containerd[1686]: 2025-11-06 05:31:17.085 [INFO][4422] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali731ef3c1e03 ContainerID="8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e" Namespace="calico-system" Pod="csi-node-driver-65j25" WorkloadEndpoint="localhost-k8s-csi--node--driver--65j25-eth0" Nov 6 05:31:17.106082 containerd[1686]: 2025-11-06 05:31:17.090 [INFO][4422] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e" Namespace="calico-system" Pod="csi-node-driver-65j25" WorkloadEndpoint="localhost-k8s-csi--node--driver--65j25-eth0" Nov 6 05:31:17.106082 containerd[1686]: 2025-11-06 05:31:17.090 [INFO][4422] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e" 
Namespace="calico-system" Pod="csi-node-driver-65j25" WorkloadEndpoint="localhost-k8s-csi--node--driver--65j25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--65j25-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8c33e9bc-df47-4f69-aab6-628eca0dd480", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 30, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e", Pod:"csi-node-driver-65j25", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali731ef3c1e03", MAC:"6a:61:43:44:ab:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:31:17.106082 containerd[1686]: 2025-11-06 05:31:17.102 [INFO][4422] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e" Namespace="calico-system" Pod="csi-node-driver-65j25" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--65j25-eth0" Nov 6 05:31:17.107833 containerd[1686]: time="2025-11-06T05:31:17.107767159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zs4hc,Uid:67fe9b67-1f92-4893-a091-15ef1297d78d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0\"" Nov 6 05:31:17.129195 containerd[1686]: time="2025-11-06T05:31:17.128038405Z" level=info msg="connecting to shim 8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e" address="unix:///run/containerd/s/d4b192b683e0b8c0ab52e5130dda26b6ac7c45a495c9e63a5f8976e931b3a7d9" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:31:17.140930 containerd[1686]: time="2025-11-06T05:31:17.140812804Z" level=info msg="CreateContainer within sandbox \"0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 05:31:17.149409 containerd[1686]: time="2025-11-06T05:31:17.149384423Z" level=info msg="Container 866414601d2c11014ed974eddd0940cfe1f9114a719104af80623e8383559100: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:31:17.151612 containerd[1686]: time="2025-11-06T05:31:17.151548188Z" level=info msg="CreateContainer within sandbox \"0cd204630f302bfaa9a441f8bd5b0a1eac0c45b28af848a5a8647b09e7577da0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"866414601d2c11014ed974eddd0940cfe1f9114a719104af80623e8383559100\"" Nov 6 05:31:17.152167 containerd[1686]: time="2025-11-06T05:31:17.152035398Z" level=info msg="StartContainer for \"866414601d2c11014ed974eddd0940cfe1f9114a719104af80623e8383559100\"" Nov 6 05:31:17.152614 containerd[1686]: time="2025-11-06T05:31:17.152599767Z" level=info msg="connecting to shim 866414601d2c11014ed974eddd0940cfe1f9114a719104af80623e8383559100" address="unix:///run/containerd/s/6875bf4f302d27cccd4e75bf7183920a4311670939650c26459d344942c9ddf6" protocol=ttrpc version=3 
Nov 6 05:31:17.153427 systemd[1]: Started cri-containerd-8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e.scope - libcontainer container 8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e. Nov 6 05:31:17.167698 systemd-resolved[1563]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 05:31:17.171432 systemd[1]: Started cri-containerd-866414601d2c11014ed974eddd0940cfe1f9114a719104af80623e8383559100.scope - libcontainer container 866414601d2c11014ed974eddd0940cfe1f9114a719104af80623e8383559100. Nov 6 05:31:17.184230 containerd[1686]: time="2025-11-06T05:31:17.184197830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-65j25,Uid:8c33e9bc-df47-4f69-aab6-628eca0dd480,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f581b6ce4adec17a3ebd841b1948872771404a69ee904f8565e134849cf950e\"" Nov 6 05:31:17.218109 containerd[1686]: time="2025-11-06T05:31:17.218083522Z" level=info msg="StartContainer for \"866414601d2c11014ed974eddd0940cfe1f9114a719104af80623e8383559100\" returns successfully" Nov 6 05:31:17.317988 containerd[1686]: time="2025-11-06T05:31:17.317954660Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:31:17.318480 containerd[1686]: time="2025-11-06T05:31:17.318460635Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 05:31:17.318564 containerd[1686]: time="2025-11-06T05:31:17.318510902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 6 05:31:17.318657 kubelet[2979]: E1106 05:31:17.318617 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:31:17.319342 kubelet[2979]: E1106 05:31:17.318665 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:31:17.319342 kubelet[2979]: E1106 05:31:17.318770 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-647585d966-dj8ql_calico-apiserver(f3152899-b21d-4929-8f12-210f707e4efb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 05:31:17.319342 kubelet[2979]: E1106 05:31:17.318795 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647585d966-dj8ql" podUID="f3152899-b21d-4929-8f12-210f707e4efb" Nov 6 05:31:17.319926 containerd[1686]: time="2025-11-06T05:31:17.319539720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 05:31:17.691063 containerd[1686]: time="2025-11-06T05:31:17.691024849Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:31:17.692020 containerd[1686]: time="2025-11-06T05:31:17.691984318Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 05:31:17.692068 containerd[1686]: time="2025-11-06T05:31:17.692047086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 6 05:31:17.692293 kubelet[2979]: E1106 05:31:17.692241 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 05:31:17.692293 kubelet[2979]: E1106 05:31:17.692278 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 05:31:17.694420 containerd[1686]: time="2025-11-06T05:31:17.693899416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 05:31:17.699754 kubelet[2979]: E1106 05:31:17.699734 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-678545c96c-k4phr_calico-system(d9191e4f-5d9b-4552-8319-79f0f354642e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 05:31:17.700048 kubelet[2979]: E1106 05:31:17.700031 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-678545c96c-k4phr" podUID="d9191e4f-5d9b-4552-8319-79f0f354642e" Nov 6 05:31:17.879178 kubelet[2979]: E1106 05:31:17.879054 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-678545c96c-k4phr" podUID="d9191e4f-5d9b-4552-8319-79f0f354642e" Nov 6 05:31:17.879993 kubelet[2979]: E1106 05:31:17.879882 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647585d966-dj8ql" podUID="f3152899-b21d-4929-8f12-210f707e4efb" Nov 6 05:31:17.913992 kubelet[2979]: I1106 05:31:17.911799 2979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zs4hc" podStartSLOduration=41.910436318 podStartE2EDuration="41.910436318s" podCreationTimestamp="2025-11-06 05:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 05:31:17.909642822 +0000 UTC m=+46.453266169" 
watchObservedRunningTime="2025-11-06 05:31:17.910436318 +0000 UTC m=+46.454059661" Nov 6 05:31:18.045432 containerd[1686]: time="2025-11-06T05:31:18.045349514Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:31:18.045704 containerd[1686]: time="2025-11-06T05:31:18.045685574Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 05:31:18.045743 containerd[1686]: time="2025-11-06T05:31:18.045734353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 6 05:31:18.045959 kubelet[2979]: E1106 05:31:18.045932 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 05:31:18.045959 kubelet[2979]: E1106 05:31:18.045964 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 05:31:18.046046 kubelet[2979]: E1106 05:31:18.046012 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-65j25_calico-system(8c33e9bc-df47-4f69-aab6-628eca0dd480): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 05:31:18.046768 containerd[1686]: time="2025-11-06T05:31:18.046735321Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 05:31:18.366293 systemd-networkd[1562]: cali3323cebc0d9: Gained IPv6LL Nov 6 05:31:18.428149 containerd[1686]: time="2025-11-06T05:31:18.428085935Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:31:18.428718 containerd[1686]: time="2025-11-06T05:31:18.428675962Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 05:31:18.428768 containerd[1686]: time="2025-11-06T05:31:18.428747658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 6 05:31:18.428974 kubelet[2979]: E1106 05:31:18.428925 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 05:31:18.428974 kubelet[2979]: E1106 05:31:18.428970 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 05:31:18.429479 kubelet[2979]: E1106 05:31:18.429035 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-65j25_calico-system(8c33e9bc-df47-4f69-aab6-628eca0dd480): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 05:31:18.429479 kubelet[2979]: E1106 05:31:18.429066 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-65j25" podUID="8c33e9bc-df47-4f69-aab6-628eca0dd480" Nov 6 05:31:18.608763 containerd[1686]: time="2025-11-06T05:31:18.608695229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-5djlt,Uid:78c7b0f4-b8bc-4238-84a0-4c0e67aa615a,Namespace:calico-system,Attempt:0,}" Nov 6 05:31:18.618308 containerd[1686]: time="2025-11-06T05:31:18.618242195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2ph6d,Uid:47bb3fbd-95a7-4d8b-a266-fa714e9b7eb7,Namespace:kube-system,Attempt:0,}" Nov 6 05:31:18.628082 containerd[1686]: time="2025-11-06T05:31:18.628029622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647585d966-4pbqk,Uid:e398733d-8974-4f27-8e6f-2b8309aa0ac5,Namespace:calico-apiserver,Attempt:0,}" Nov 6 05:31:18.850651 systemd-networkd[1562]: cali4c4724fe2ab: Link UP Nov 6 05:31:18.851451 systemd-networkd[1562]: cali4c4724fe2ab: Gained carrier Nov 6 05:31:18.870259 containerd[1686]: 2025-11-06 05:31:18.738 [INFO][4746] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-coredns--66bc5c9577--2ph6d-eth0 coredns-66bc5c9577- kube-system 47bb3fbd-95a7-4d8b-a266-fa714e9b7eb7 810 0 2025-11-06 05:30:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-2ph6d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4c4724fe2ab [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f" Namespace="kube-system" Pod="coredns-66bc5c9577-2ph6d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2ph6d-" Nov 6 05:31:18.870259 containerd[1686]: 2025-11-06 05:31:18.738 [INFO][4746] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f" Namespace="kube-system" Pod="coredns-66bc5c9577-2ph6d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2ph6d-eth0" Nov 6 05:31:18.870259 containerd[1686]: 2025-11-06 05:31:18.769 [INFO][4765] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f" HandleID="k8s-pod-network.63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f" Workload="localhost-k8s-coredns--66bc5c9577--2ph6d-eth0" Nov 6 05:31:18.870259 containerd[1686]: 2025-11-06 05:31:18.769 [INFO][4765] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f" HandleID="k8s-pod-network.63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f" Workload="localhost-k8s-coredns--66bc5c9577--2ph6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", 
"pod":"coredns-66bc5c9577-2ph6d", "timestamp":"2025-11-06 05:31:18.769846222 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:31:18.870259 containerd[1686]: 2025-11-06 05:31:18.769 [INFO][4765] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:31:18.870259 containerd[1686]: 2025-11-06 05:31:18.769 [INFO][4765] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 05:31:18.870259 containerd[1686]: 2025-11-06 05:31:18.769 [INFO][4765] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 05:31:18.870259 containerd[1686]: 2025-11-06 05:31:18.775 [INFO][4765] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f" host="localhost" Nov 6 05:31:18.870259 containerd[1686]: 2025-11-06 05:31:18.778 [INFO][4765] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 05:31:18.870259 containerd[1686]: 2025-11-06 05:31:18.781 [INFO][4765] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 05:31:18.870259 containerd[1686]: 2025-11-06 05:31:18.784 [INFO][4765] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 05:31:18.870259 containerd[1686]: 2025-11-06 05:31:18.788 [INFO][4765] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 05:31:18.870259 containerd[1686]: 2025-11-06 05:31:18.788 [INFO][4765] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f" host="localhost" Nov 6 05:31:18.870259 containerd[1686]: 2025-11-06 05:31:18.790 [INFO][4765] ipam/ipam.go 1780: Creating new 
handle: k8s-pod-network.63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f Nov 6 05:31:18.870259 containerd[1686]: 2025-11-06 05:31:18.794 [INFO][4765] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f" host="localhost" Nov 6 05:31:18.870259 containerd[1686]: 2025-11-06 05:31:18.810 [INFO][4765] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f" host="localhost" Nov 6 05:31:18.870259 containerd[1686]: 2025-11-06 05:31:18.810 [INFO][4765] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f" host="localhost" Nov 6 05:31:18.870259 containerd[1686]: 2025-11-06 05:31:18.810 [INFO][4765] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 05:31:18.870259 containerd[1686]: 2025-11-06 05:31:18.810 [INFO][4765] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f" HandleID="k8s-pod-network.63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f" Workload="localhost-k8s-coredns--66bc5c9577--2ph6d-eth0" Nov 6 05:31:18.871368 containerd[1686]: 2025-11-06 05:31:18.835 [INFO][4746] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f" Namespace="kube-system" Pod="coredns-66bc5c9577-2ph6d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2ph6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--2ph6d-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"47bb3fbd-95a7-4d8b-a266-fa714e9b7eb7", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 30, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-2ph6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c4724fe2ab", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:31:18.871368 containerd[1686]: 2025-11-06 05:31:18.842 [INFO][4746] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f" Namespace="kube-system" Pod="coredns-66bc5c9577-2ph6d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2ph6d-eth0" Nov 6 05:31:18.871368 containerd[1686]: 2025-11-06 05:31:18.843 [INFO][4746] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c4724fe2ab ContainerID="63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f" Namespace="kube-system" Pod="coredns-66bc5c9577-2ph6d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2ph6d-eth0" Nov 6 05:31:18.871368 containerd[1686]: 2025-11-06 05:31:18.851 [INFO][4746] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f" Namespace="kube-system" Pod="coredns-66bc5c9577-2ph6d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2ph6d-eth0" Nov 6 05:31:18.871368 containerd[1686]: 2025-11-06 05:31:18.852 [INFO][4746] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f" Namespace="kube-system" Pod="coredns-66bc5c9577-2ph6d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2ph6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--2ph6d-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"47bb3fbd-95a7-4d8b-a266-fa714e9b7eb7", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 30, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f", Pod:"coredns-66bc5c9577-2ph6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c4724fe2ab", MAC:"62:84:43:41:51:82", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:31:18.871368 containerd[1686]: 2025-11-06 05:31:18.866 [INFO][4746] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f" Namespace="kube-system" Pod="coredns-66bc5c9577-2ph6d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2ph6d-eth0" Nov 6 05:31:18.878362 systemd-networkd[1562]: caliddcb66269aa: Gained IPv6LL Nov 6 05:31:18.878823 systemd-networkd[1562]: cali7b941c721e8: Gained IPv6LL Nov 6 05:31:18.900246 kubelet[2979]: E1106 05:31:18.900187 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-678545c96c-k4phr" podUID="d9191e4f-5d9b-4552-8319-79f0f354642e" Nov 6 05:31:18.909350 kubelet[2979]: E1106 05:31:18.900370 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-647585d966-dj8ql" podUID="f3152899-b21d-4929-8f12-210f707e4efb" Nov 6 05:31:18.909350 kubelet[2979]: E1106 05:31:18.901023 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-65j25" podUID="8c33e9bc-df47-4f69-aab6-628eca0dd480" Nov 6 05:31:18.931732 systemd-networkd[1562]: calia7c5d9a2fb7: Link UP Nov 6 05:31:18.932725 systemd-networkd[1562]: calia7c5d9a2fb7: Gained carrier Nov 6 05:31:18.963365 containerd[1686]: 2025-11-06 05:31:18.751 [INFO][4756] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--647585d966--4pbqk-eth0 calico-apiserver-647585d966- calico-apiserver e398733d-8974-4f27-8e6f-2b8309aa0ac5 812 0 2025-11-06 05:30:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:647585d966 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-647585d966-4pbqk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia7c5d9a2fb7 [] [] }} 
ContainerID="38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666" Namespace="calico-apiserver" Pod="calico-apiserver-647585d966-4pbqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--647585d966--4pbqk-" Nov 6 05:31:18.963365 containerd[1686]: 2025-11-06 05:31:18.751 [INFO][4756] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666" Namespace="calico-apiserver" Pod="calico-apiserver-647585d966-4pbqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--647585d966--4pbqk-eth0" Nov 6 05:31:18.963365 containerd[1686]: 2025-11-06 05:31:18.789 [INFO][4774] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666" HandleID="k8s-pod-network.38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666" Workload="localhost-k8s-calico--apiserver--647585d966--4pbqk-eth0" Nov 6 05:31:18.963365 containerd[1686]: 2025-11-06 05:31:18.790 [INFO][4774] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666" HandleID="k8s-pod-network.38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666" Workload="localhost-k8s-calico--apiserver--647585d966--4pbqk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d1680), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-647585d966-4pbqk", "timestamp":"2025-11-06 05:31:18.789780668 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:31:18.963365 containerd[1686]: 2025-11-06 05:31:18.790 [INFO][4774] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 6 05:31:18.963365 containerd[1686]: 2025-11-06 05:31:18.810 [INFO][4774] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 05:31:18.963365 containerd[1686]: 2025-11-06 05:31:18.810 [INFO][4774] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 05:31:18.963365 containerd[1686]: 2025-11-06 05:31:18.887 [INFO][4774] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666" host="localhost" Nov 6 05:31:18.963365 containerd[1686]: 2025-11-06 05:31:18.889 [INFO][4774] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 05:31:18.963365 containerd[1686]: 2025-11-06 05:31:18.891 [INFO][4774] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 05:31:18.963365 containerd[1686]: 2025-11-06 05:31:18.892 [INFO][4774] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 05:31:18.963365 containerd[1686]: 2025-11-06 05:31:18.894 [INFO][4774] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 05:31:18.963365 containerd[1686]: 2025-11-06 05:31:18.894 [INFO][4774] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666" host="localhost" Nov 6 05:31:18.963365 containerd[1686]: 2025-11-06 05:31:18.895 [INFO][4774] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666 Nov 6 05:31:18.963365 containerd[1686]: 2025-11-06 05:31:18.901 [INFO][4774] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666" host="localhost" Nov 6 05:31:18.963365 containerd[1686]: 2025-11-06 05:31:18.922 [INFO][4774] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666" host="localhost" Nov 6 05:31:18.963365 containerd[1686]: 2025-11-06 05:31:18.922 [INFO][4774] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666" host="localhost" Nov 6 05:31:18.963365 containerd[1686]: 2025-11-06 05:31:18.922 [INFO][4774] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 05:31:18.963365 containerd[1686]: 2025-11-06 05:31:18.922 [INFO][4774] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666" HandleID="k8s-pod-network.38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666" Workload="localhost-k8s-calico--apiserver--647585d966--4pbqk-eth0" Nov 6 05:31:18.985320 containerd[1686]: 2025-11-06 05:31:18.926 [INFO][4756] cni-plugin/k8s.go 418: Populated endpoint ContainerID="38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666" Namespace="calico-apiserver" Pod="calico-apiserver-647585d966-4pbqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--647585d966--4pbqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--647585d966--4pbqk-eth0", GenerateName:"calico-apiserver-647585d966-", Namespace:"calico-apiserver", SelfLink:"", UID:"e398733d-8974-4f27-8e6f-2b8309aa0ac5", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 30, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647585d966", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-647585d966-4pbqk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7c5d9a2fb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:31:18.985320 containerd[1686]: 2025-11-06 05:31:18.926 [INFO][4756] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666" Namespace="calico-apiserver" Pod="calico-apiserver-647585d966-4pbqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--647585d966--4pbqk-eth0" Nov 6 05:31:18.985320 containerd[1686]: 2025-11-06 05:31:18.926 [INFO][4756] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7c5d9a2fb7 ContainerID="38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666" Namespace="calico-apiserver" Pod="calico-apiserver-647585d966-4pbqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--647585d966--4pbqk-eth0" Nov 6 05:31:18.985320 containerd[1686]: 2025-11-06 05:31:18.933 [INFO][4756] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666" Namespace="calico-apiserver" Pod="calico-apiserver-647585d966-4pbqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--647585d966--4pbqk-eth0" Nov 6 05:31:18.985320 containerd[1686]: 2025-11-06 
05:31:18.941 [INFO][4756] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666" Namespace="calico-apiserver" Pod="calico-apiserver-647585d966-4pbqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--647585d966--4pbqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--647585d966--4pbqk-eth0", GenerateName:"calico-apiserver-647585d966-", Namespace:"calico-apiserver", SelfLink:"", UID:"e398733d-8974-4f27-8e6f-2b8309aa0ac5", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 30, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"647585d966", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666", Pod:"calico-apiserver-647585d966-4pbqk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7c5d9a2fb7", MAC:"1e:fd:10:c7:fe:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:31:18.985320 containerd[1686]: 2025-11-06 05:31:18.961 [INFO][4756] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666" Namespace="calico-apiserver" Pod="calico-apiserver-647585d966-4pbqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--647585d966--4pbqk-eth0" Nov 6 05:31:19.020283 containerd[1686]: time="2025-11-06T05:31:19.020233589Z" level=info msg="connecting to shim 63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f" address="unix:///run/containerd/s/6c6f9e3553aa785ed7d85e96a1a204dfc0bf22482ee089bc0d120a134afe7f56" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:31:19.021791 containerd[1686]: time="2025-11-06T05:31:19.021775887Z" level=info msg="connecting to shim 38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666" address="unix:///run/containerd/s/f298fc939923ff6537d97fd5af917cddd90c1ce042d33042363cf108e1fa9629" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:31:19.027233 systemd-networkd[1562]: calic2f4f1ecf4e: Link UP Nov 6 05:31:19.027348 systemd-networkd[1562]: calic2f4f1ecf4e: Gained carrier Nov 6 05:31:19.059961 containerd[1686]: 2025-11-06 05:31:18.748 [INFO][4741] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--5djlt-eth0 goldmane-7c778bb748- calico-system 78c7b0f4-b8bc-4238-84a0-4c0e67aa615a 808 0 2025-11-06 05:30:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-5djlt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic2f4f1ecf4e [] [] }} ContainerID="70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415" Namespace="calico-system" Pod="goldmane-7c778bb748-5djlt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--5djlt-" Nov 6 05:31:19.059961 containerd[1686]: 
2025-11-06 05:31:18.748 [INFO][4741] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415" Namespace="calico-system" Pod="goldmane-7c778bb748-5djlt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--5djlt-eth0" Nov 6 05:31:19.059961 containerd[1686]: 2025-11-06 05:31:18.791 [INFO][4779] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415" HandleID="k8s-pod-network.70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415" Workload="localhost-k8s-goldmane--7c778bb748--5djlt-eth0" Nov 6 05:31:19.059961 containerd[1686]: 2025-11-06 05:31:18.791 [INFO][4779] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415" HandleID="k8s-pod-network.70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415" Workload="localhost-k8s-goldmane--7c778bb748--5djlt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f950), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-5djlt", "timestamp":"2025-11-06 05:31:18.7915489 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:31:19.059961 containerd[1686]: 2025-11-06 05:31:18.791 [INFO][4779] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:31:19.059961 containerd[1686]: 2025-11-06 05:31:18.922 [INFO][4779] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 05:31:19.059961 containerd[1686]: 2025-11-06 05:31:18.922 [INFO][4779] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 05:31:19.059961 containerd[1686]: 2025-11-06 05:31:18.976 [INFO][4779] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415" host="localhost" Nov 6 05:31:19.059961 containerd[1686]: 2025-11-06 05:31:18.993 [INFO][4779] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 05:31:19.059961 containerd[1686]: 2025-11-06 05:31:18.996 [INFO][4779] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 05:31:19.059961 containerd[1686]: 2025-11-06 05:31:18.999 [INFO][4779] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 05:31:19.059961 containerd[1686]: 2025-11-06 05:31:19.000 [INFO][4779] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 05:31:19.059961 containerd[1686]: 2025-11-06 05:31:19.000 [INFO][4779] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415" host="localhost" Nov 6 05:31:19.059961 containerd[1686]: 2025-11-06 05:31:19.003 [INFO][4779] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415 Nov 6 05:31:19.059961 containerd[1686]: 2025-11-06 05:31:19.009 [INFO][4779] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415" host="localhost" Nov 6 05:31:19.059961 containerd[1686]: 2025-11-06 05:31:19.019 [INFO][4779] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415" host="localhost" Nov 6 05:31:19.059961 containerd[1686]: 2025-11-06 05:31:19.019 [INFO][4779] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415" host="localhost" Nov 6 05:31:19.059961 containerd[1686]: 2025-11-06 05:31:19.019 [INFO][4779] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 05:31:19.059961 containerd[1686]: 2025-11-06 05:31:19.019 [INFO][4779] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415" HandleID="k8s-pod-network.70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415" Workload="localhost-k8s-goldmane--7c778bb748--5djlt-eth0" Nov 6 05:31:19.060408 containerd[1686]: 2025-11-06 05:31:19.025 [INFO][4741] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415" Namespace="calico-system" Pod="goldmane-7c778bb748-5djlt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--5djlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--5djlt-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"78c7b0f4-b8bc-4238-84a0-4c0e67aa615a", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 30, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-5djlt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic2f4f1ecf4e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:31:19.060408 containerd[1686]: 2025-11-06 05:31:19.025 [INFO][4741] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415" Namespace="calico-system" Pod="goldmane-7c778bb748-5djlt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--5djlt-eth0" Nov 6 05:31:19.060408 containerd[1686]: 2025-11-06 05:31:19.025 [INFO][4741] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2f4f1ecf4e ContainerID="70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415" Namespace="calico-system" Pod="goldmane-7c778bb748-5djlt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--5djlt-eth0" Nov 6 05:31:19.060408 containerd[1686]: 2025-11-06 05:31:19.028 [INFO][4741] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415" Namespace="calico-system" Pod="goldmane-7c778bb748-5djlt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--5djlt-eth0" Nov 6 05:31:19.060408 containerd[1686]: 2025-11-06 05:31:19.028 [INFO][4741] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415" Namespace="calico-system" Pod="goldmane-7c778bb748-5djlt" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--5djlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--5djlt-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"78c7b0f4-b8bc-4238-84a0-4c0e67aa615a", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 30, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415", Pod:"goldmane-7c778bb748-5djlt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic2f4f1ecf4e", MAC:"a2:45:db:f7:e7:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:31:19.060408 containerd[1686]: 2025-11-06 05:31:19.047 [INFO][4741] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415" Namespace="calico-system" Pod="goldmane-7c778bb748-5djlt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--5djlt-eth0" Nov 6 05:31:19.070263 systemd-networkd[1562]: cali731ef3c1e03: Gained IPv6LL Nov 6 05:31:19.090903 containerd[1686]: 
time="2025-11-06T05:31:19.090876735Z" level=info msg="connecting to shim 70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415" address="unix:///run/containerd/s/93495544b532857d5dd2c95fa594127cea491ce70426e29b0b0998ca9f3cc7f5" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:31:19.093263 systemd[1]: Started cri-containerd-38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666.scope - libcontainer container 38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666. Nov 6 05:31:19.100381 systemd[1]: Started cri-containerd-63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f.scope - libcontainer container 63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f. Nov 6 05:31:19.117738 systemd-resolved[1563]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 05:31:19.123254 systemd[1]: Started cri-containerd-70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415.scope - libcontainer container 70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415. 
Nov 6 05:31:19.148608 systemd-resolved[1563]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 05:31:19.157961 containerd[1686]: time="2025-11-06T05:31:19.157933575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2ph6d,Uid:47bb3fbd-95a7-4d8b-a266-fa714e9b7eb7,Namespace:kube-system,Attempt:0,} returns sandbox id \"63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f\"" Nov 6 05:31:19.162003 containerd[1686]: time="2025-11-06T05:31:19.161977167Z" level=info msg="CreateContainer within sandbox \"63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 05:31:19.170942 systemd-resolved[1563]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 05:31:19.172930 containerd[1686]: time="2025-11-06T05:31:19.172749693Z" level=info msg="Container 732ed8aeb8d298c56f9b88079360f3aadac63bada20ef86b557e89952ba59c80: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:31:19.177660 containerd[1686]: time="2025-11-06T05:31:19.177636233Z" level=info msg="CreateContainer within sandbox \"63706bcb0a82edb6883930eb8d8a599dfc5534f695e21f8220547bf4a8c8f06f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"732ed8aeb8d298c56f9b88079360f3aadac63bada20ef86b557e89952ba59c80\"" Nov 6 05:31:19.178561 containerd[1686]: time="2025-11-06T05:31:19.178523398Z" level=info msg="StartContainer for \"732ed8aeb8d298c56f9b88079360f3aadac63bada20ef86b557e89952ba59c80\"" Nov 6 05:31:19.179480 containerd[1686]: time="2025-11-06T05:31:19.179430675Z" level=info msg="connecting to shim 732ed8aeb8d298c56f9b88079360f3aadac63bada20ef86b557e89952ba59c80" address="unix:///run/containerd/s/6c6f9e3553aa785ed7d85e96a1a204dfc0bf22482ee089bc0d120a134afe7f56" protocol=ttrpc version=3 Nov 6 05:31:19.198293 systemd[1]: Started 
cri-containerd-732ed8aeb8d298c56f9b88079360f3aadac63bada20ef86b557e89952ba59c80.scope - libcontainer container 732ed8aeb8d298c56f9b88079360f3aadac63bada20ef86b557e89952ba59c80. Nov 6 05:31:19.230078 containerd[1686]: time="2025-11-06T05:31:19.229118637Z" level=info msg="StartContainer for \"732ed8aeb8d298c56f9b88079360f3aadac63bada20ef86b557e89952ba59c80\" returns successfully" Nov 6 05:31:19.243805 containerd[1686]: time="2025-11-06T05:31:19.243775604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647585d966-4pbqk,Uid:e398733d-8974-4f27-8e6f-2b8309aa0ac5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"38eecbb61db524c3349e418c1bcae948a4aadb1357eb55f60f1a7a592bc5a666\"" Nov 6 05:31:19.246264 containerd[1686]: time="2025-11-06T05:31:19.245936617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-5djlt,Uid:78c7b0f4-b8bc-4238-84a0-4c0e67aa615a,Namespace:calico-system,Attempt:0,} returns sandbox id \"70834d1bdb057f624fb99f9fd4a432ed10eb1c00a5d55c8990fbecae37e1b415\"" Nov 6 05:31:19.246264 containerd[1686]: time="2025-11-06T05:31:19.246118858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 05:31:19.597559 containerd[1686]: time="2025-11-06T05:31:19.597516512Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:31:19.605057 containerd[1686]: time="2025-11-06T05:31:19.605034423Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 05:31:19.605535 containerd[1686]: time="2025-11-06T05:31:19.605095434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 6 05:31:19.605569 kubelet[2979]: E1106 05:31:19.605198 2979 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:31:19.605569 kubelet[2979]: E1106 05:31:19.605228 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:31:19.605569 kubelet[2979]: E1106 05:31:19.605335 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-647585d966-4pbqk_calico-apiserver(e398733d-8974-4f27-8e6f-2b8309aa0ac5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 05:31:19.605569 kubelet[2979]: E1106 05:31:19.605356 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647585d966-4pbqk" podUID="e398733d-8974-4f27-8e6f-2b8309aa0ac5" Nov 6 05:31:19.606006 containerd[1686]: time="2025-11-06T05:31:19.605884469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 05:31:19.902600 kubelet[2979]: E1106 05:31:19.902476 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647585d966-4pbqk" podUID="e398733d-8974-4f27-8e6f-2b8309aa0ac5" Nov 6 05:31:19.925146 containerd[1686]: time="2025-11-06T05:31:19.925108785Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:31:19.929711 containerd[1686]: time="2025-11-06T05:31:19.928967457Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 05:31:19.929711 containerd[1686]: time="2025-11-06T05:31:19.929054465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 6 05:31:19.935614 kubelet[2979]: E1106 05:31:19.929896 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 05:31:19.935614 kubelet[2979]: E1106 05:31:19.929923 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 05:31:19.935614 kubelet[2979]: E1106 05:31:19.929965 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-5djlt_calico-system(78c7b0f4-b8bc-4238-84a0-4c0e67aa615a): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 05:31:19.935614 kubelet[2979]: E1106 05:31:19.929984 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5djlt" podUID="78c7b0f4-b8bc-4238-84a0-4c0e67aa615a" Nov 6 05:31:19.973193 kubelet[2979]: I1106 05:31:19.973038 2979 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2ph6d" podStartSLOduration=43.973025898 podStartE2EDuration="43.973025898s" podCreationTimestamp="2025-11-06 05:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 05:31:19.953538436 +0000 UTC m=+48.497161790" watchObservedRunningTime="2025-11-06 05:31:19.973025898 +0000 UTC m=+48.516649245" Nov 6 05:31:20.094307 systemd-networkd[1562]: calic2f4f1ecf4e: Gained IPv6LL Nov 6 05:31:20.478246 systemd-networkd[1562]: cali4c4724fe2ab: Gained IPv6LL Nov 6 05:31:20.906749 kubelet[2979]: E1106 05:31:20.906580 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5djlt" podUID="78c7b0f4-b8bc-4238-84a0-4c0e67aa615a" Nov 6 05:31:20.907740 
kubelet[2979]: E1106 05:31:20.907712 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647585d966-4pbqk" podUID="e398733d-8974-4f27-8e6f-2b8309aa0ac5" Nov 6 05:31:20.926595 systemd-networkd[1562]: calia7c5d9a2fb7: Gained IPv6LL Nov 6 05:31:25.599154 containerd[1686]: time="2025-11-06T05:31:25.599077416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 05:31:25.939429 containerd[1686]: time="2025-11-06T05:31:25.939228309Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:31:25.941140 containerd[1686]: time="2025-11-06T05:31:25.940336410Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 05:31:25.941140 containerd[1686]: time="2025-11-06T05:31:25.940391999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 6 05:31:25.941205 kubelet[2979]: E1106 05:31:25.940474 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 05:31:25.941205 kubelet[2979]: E1106 05:31:25.940513 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 05:31:25.941205 kubelet[2979]: E1106 05:31:25.940566 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-66c4b95784-sxn7m_calico-system(af2848d9-f5cf-4973-af81-ec9678a8d6c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 05:31:25.942204 containerd[1686]: time="2025-11-06T05:31:25.942188356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 05:31:26.281539 containerd[1686]: time="2025-11-06T05:31:26.281397665Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:31:26.281917 containerd[1686]: time="2025-11-06T05:31:26.281894919Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 05:31:26.281962 containerd[1686]: time="2025-11-06T05:31:26.281944398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 6 05:31:26.282069 kubelet[2979]: E1106 05:31:26.282042 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 05:31:26.282125 kubelet[2979]: E1106 05:31:26.282076 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 05:31:26.282230 kubelet[2979]: E1106 05:31:26.282125 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-66c4b95784-sxn7m_calico-system(af2848d9-f5cf-4973-af81-ec9678a8d6c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 05:31:26.282230 kubelet[2979]: E1106 05:31:26.282180 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c4b95784-sxn7m" podUID="af2848d9-f5cf-4973-af81-ec9678a8d6c7" Nov 6 05:31:29.598228 containerd[1686]: time="2025-11-06T05:31:29.597535538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 05:31:29.940930 containerd[1686]: time="2025-11-06T05:31:29.940851196Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:31:29.941271 containerd[1686]: time="2025-11-06T05:31:29.941175539Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 05:31:29.941424 kubelet[2979]: E1106 05:31:29.941403 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 05:31:29.941893 kubelet[2979]: E1106 05:31:29.941613 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 05:31:29.941893 kubelet[2979]: E1106 05:31:29.941670 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-678545c96c-k4phr_calico-system(d9191e4f-5d9b-4552-8319-79f0f354642e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 05:31:29.941893 kubelet[2979]: E1106 05:31:29.941693 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-678545c96c-k4phr" podUID="d9191e4f-5d9b-4552-8319-79f0f354642e" Nov 6 05:31:29.942693 containerd[1686]: 
time="2025-11-06T05:31:29.941294058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 6 05:31:32.598501 containerd[1686]: time="2025-11-06T05:31:32.598190643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 05:31:32.953920 containerd[1686]: time="2025-11-06T05:31:32.953828586Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:31:32.958506 containerd[1686]: time="2025-11-06T05:31:32.958475108Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 05:31:32.958572 containerd[1686]: time="2025-11-06T05:31:32.958535763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 6 05:31:32.958699 kubelet[2979]: E1106 05:31:32.958656 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 05:31:32.958699 kubelet[2979]: E1106 05:31:32.958691 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 05:31:32.959862 kubelet[2979]: E1106 05:31:32.958751 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-5djlt_calico-system(78c7b0f4-b8bc-4238-84a0-4c0e67aa615a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 05:31:32.959862 kubelet[2979]: E1106 05:31:32.958777 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5djlt" podUID="78c7b0f4-b8bc-4238-84a0-4c0e67aa615a" Nov 6 05:31:33.598398 containerd[1686]: time="2025-11-06T05:31:33.598292395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 05:31:33.927542 containerd[1686]: time="2025-11-06T05:31:33.927301285Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:31:33.933217 containerd[1686]: time="2025-11-06T05:31:33.933136796Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 05:31:33.933217 containerd[1686]: time="2025-11-06T05:31:33.933186614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 6 05:31:33.933443 kubelet[2979]: E1106 05:31:33.933414 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:31:33.933509 kubelet[2979]: E1106 05:31:33.933445 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:31:33.933509 kubelet[2979]: E1106 05:31:33.933497 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-647585d966-dj8ql_calico-apiserver(f3152899-b21d-4929-8f12-210f707e4efb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 05:31:33.933596 kubelet[2979]: E1106 05:31:33.933519 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647585d966-dj8ql" podUID="f3152899-b21d-4929-8f12-210f707e4efb" Nov 6 05:31:34.598177 containerd[1686]: time="2025-11-06T05:31:34.598117224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 05:31:34.950247 containerd[1686]: time="2025-11-06T05:31:34.950001562Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:31:34.969565 containerd[1686]: time="2025-11-06T05:31:34.969491591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 05:31:34.969746 containerd[1686]: time="2025-11-06T05:31:34.969658742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 6 05:31:34.969847 
kubelet[2979]: E1106 05:31:34.969827 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 05:31:34.970202 kubelet[2979]: E1106 05:31:34.970037 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 05:31:34.970202 kubelet[2979]: E1106 05:31:34.970093 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-65j25_calico-system(8c33e9bc-df47-4f69-aab6-628eca0dd480): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 05:31:34.970570 containerd[1686]: time="2025-11-06T05:31:34.970559837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 05:31:35.313239 containerd[1686]: time="2025-11-06T05:31:35.313156030Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:31:35.318157 containerd[1686]: time="2025-11-06T05:31:35.318102548Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 05:31:35.318280 containerd[1686]: time="2025-11-06T05:31:35.318187699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes 
read=0" Nov 6 05:31:35.318339 kubelet[2979]: E1106 05:31:35.318311 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 05:31:35.324518 kubelet[2979]: E1106 05:31:35.318341 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 05:31:35.324518 kubelet[2979]: E1106 05:31:35.318397 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-65j25_calico-system(8c33e9bc-df47-4f69-aab6-628eca0dd480): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 05:31:35.324518 kubelet[2979]: E1106 05:31:35.318423 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-65j25" podUID="8c33e9bc-df47-4f69-aab6-628eca0dd480" Nov 6 05:31:36.598107 containerd[1686]: time="2025-11-06T05:31:36.598026291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 05:31:36.946206 containerd[1686]: time="2025-11-06T05:31:36.945671931Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:31:36.956843 containerd[1686]: time="2025-11-06T05:31:36.956746714Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 05:31:36.956843 containerd[1686]: time="2025-11-06T05:31:36.956821206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 6 05:31:36.957214 kubelet[2979]: E1106 05:31:36.957038 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:31:36.957214 kubelet[2979]: E1106 05:31:36.957069 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:31:36.957214 kubelet[2979]: E1106 05:31:36.957143 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-647585d966-4pbqk_calico-apiserver(e398733d-8974-4f27-8e6f-2b8309aa0ac5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 05:31:36.957214 kubelet[2979]: E1106 05:31:36.957174 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647585d966-4pbqk" podUID="e398733d-8974-4f27-8e6f-2b8309aa0ac5" Nov 6 05:31:38.600857 kubelet[2979]: E1106 05:31:38.600767 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c4b95784-sxn7m" podUID="af2848d9-f5cf-4973-af81-ec9678a8d6c7" Nov 6 05:31:45.601153 kubelet[2979]: E1106 05:31:45.598865 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-678545c96c-k4phr" podUID="d9191e4f-5d9b-4552-8319-79f0f354642e" Nov 6 05:31:46.598580 kubelet[2979]: E1106 05:31:46.598552 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647585d966-dj8ql" podUID="f3152899-b21d-4929-8f12-210f707e4efb" Nov 6 05:31:46.598942 kubelet[2979]: E1106 05:31:46.598826 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5djlt" podUID="78c7b0f4-b8bc-4238-84a0-4c0e67aa615a" Nov 6 05:31:48.597674 kubelet[2979]: E1106 05:31:48.597635 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647585d966-4pbqk" podUID="e398733d-8974-4f27-8e6f-2b8309aa0ac5" Nov 6 
05:31:48.600711 kubelet[2979]: E1106 05:31:48.600667 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-65j25" podUID="8c33e9bc-df47-4f69-aab6-628eca0dd480" Nov 6 05:31:50.113396 systemd[1]: Started sshd@7-139.178.70.103:22-139.178.68.195:55092.service - OpenSSH per-connection server daemon (139.178.68.195:55092). Nov 6 05:31:50.232653 sshd[5073]: Accepted publickey for core from 139.178.68.195 port 55092 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU Nov 6 05:31:50.234407 sshd-session[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:31:50.239126 systemd-logind[1657]: New session 10 of user core. Nov 6 05:31:50.243217 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 6 05:31:50.959039 sshd[5076]: Connection closed by 139.178.68.195 port 55092 Nov 6 05:31:50.959352 sshd-session[5073]: pam_unix(sshd:session): session closed for user core Nov 6 05:31:50.964503 systemd[1]: sshd@7-139.178.70.103:22-139.178.68.195:55092.service: Deactivated successfully. Nov 6 05:31:50.966845 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 05:31:50.968444 systemd-logind[1657]: Session 10 logged out. 
Waiting for processes to exit. Nov 6 05:31:50.969710 systemd-logind[1657]: Removed session 10. Nov 6 05:31:52.598749 containerd[1686]: time="2025-11-06T05:31:52.598715194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 05:31:52.932837 containerd[1686]: time="2025-11-06T05:31:52.932574867Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:31:52.933280 containerd[1686]: time="2025-11-06T05:31:52.933093783Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 05:31:52.933280 containerd[1686]: time="2025-11-06T05:31:52.933165589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 6 05:31:52.933375 kubelet[2979]: E1106 05:31:52.933296 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 05:31:52.933375 kubelet[2979]: E1106 05:31:52.933342 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 05:31:52.936335 kubelet[2979]: E1106 05:31:52.936124 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-66c4b95784-sxn7m_calico-system(af2848d9-f5cf-4973-af81-ec9678a8d6c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 05:31:52.937038 containerd[1686]: time="2025-11-06T05:31:52.936976536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 05:31:53.337695 containerd[1686]: time="2025-11-06T05:31:53.337504053Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:31:53.337973 containerd[1686]: time="2025-11-06T05:31:53.337948293Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 05:31:53.338057 containerd[1686]: time="2025-11-06T05:31:53.338003741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 6 05:31:53.338221 kubelet[2979]: E1106 05:31:53.338194 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 05:31:53.338267 kubelet[2979]: E1106 05:31:53.338228 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 05:31:53.338294 kubelet[2979]: E1106 05:31:53.338282 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-66c4b95784-sxn7m_calico-system(af2848d9-f5cf-4973-af81-ec9678a8d6c7): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 05:31:53.338338 kubelet[2979]: E1106 05:31:53.338318 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c4b95784-sxn7m" podUID="af2848d9-f5cf-4973-af81-ec9678a8d6c7" Nov 6 05:31:55.970955 systemd[1]: Started sshd@8-139.178.70.103:22-139.178.68.195:43180.service - OpenSSH per-connection server daemon (139.178.68.195:43180). Nov 6 05:31:56.052975 sshd[5095]: Accepted publickey for core from 139.178.68.195 port 43180 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU Nov 6 05:31:56.054008 sshd-session[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:31:56.057513 systemd-logind[1657]: New session 11 of user core. Nov 6 05:31:56.065352 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 6 05:31:56.142023 sshd[5098]: Connection closed by 139.178.68.195 port 43180 Nov 6 05:31:56.142512 sshd-session[5095]: pam_unix(sshd:session): session closed for user core Nov 6 05:31:56.145847 systemd[1]: sshd@8-139.178.70.103:22-139.178.68.195:43180.service: Deactivated successfully. Nov 6 05:31:56.149056 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 05:31:56.150647 systemd-logind[1657]: Session 11 logged out. 
Waiting for processes to exit. Nov 6 05:31:56.151631 systemd-logind[1657]: Removed session 11. Nov 6 05:31:57.599576 containerd[1686]: time="2025-11-06T05:31:57.598830831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 05:31:57.939872 containerd[1686]: time="2025-11-06T05:31:57.939794591Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:31:57.946627 containerd[1686]: time="2025-11-06T05:31:57.946588621Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 05:31:57.946735 containerd[1686]: time="2025-11-06T05:31:57.946655549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 6 05:31:57.946932 kubelet[2979]: E1106 05:31:57.946816 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 05:31:57.946932 kubelet[2979]: E1106 05:31:57.946859 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 05:31:57.946932 kubelet[2979]: E1106 05:31:57.946912 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-678545c96c-k4phr_calico-system(d9191e4f-5d9b-4552-8319-79f0f354642e): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 05:31:57.947414 kubelet[2979]: E1106 05:31:57.947211 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-678545c96c-k4phr" podUID="d9191e4f-5d9b-4552-8319-79f0f354642e" Nov 6 05:31:59.601430 containerd[1686]: time="2025-11-06T05:31:59.601400167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 05:31:59.996815 containerd[1686]: time="2025-11-06T05:31:59.996610374Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:31:59.998265 containerd[1686]: time="2025-11-06T05:31:59.998247461Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 05:31:59.998383 containerd[1686]: time="2025-11-06T05:31:59.998291199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 6 05:31:59.998538 kubelet[2979]: E1106 05:31:59.998510 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 05:31:59.998816 kubelet[2979]: E1106 
05:31:59.998788 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 05:31:59.999030 containerd[1686]: time="2025-11-06T05:31:59.999015138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 05:31:59.999439 kubelet[2979]: E1106 05:31:59.999159 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-5djlt_calico-system(78c7b0f4-b8bc-4238-84a0-4c0e67aa615a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 05:31:59.999439 kubelet[2979]: E1106 05:31:59.999186 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5djlt" podUID="78c7b0f4-b8bc-4238-84a0-4c0e67aa615a" Nov 6 05:32:00.342267 containerd[1686]: time="2025-11-06T05:32:00.342228806Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:32:00.342749 containerd[1686]: time="2025-11-06T05:32:00.342730368Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 05:32:00.343666 containerd[1686]: time="2025-11-06T05:32:00.342778937Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 6 05:32:00.343666 containerd[1686]: time="2025-11-06T05:32:00.343226798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 05:32:00.344856 kubelet[2979]: E1106 05:32:00.342977 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:32:00.344856 kubelet[2979]: E1106 05:32:00.343013 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:32:00.344856 kubelet[2979]: E1106 05:32:00.343550 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-647585d966-dj8ql_calico-apiserver(f3152899-b21d-4929-8f12-210f707e4efb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 05:32:00.344856 kubelet[2979]: E1106 05:32:00.343587 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647585d966-dj8ql" podUID="f3152899-b21d-4929-8f12-210f707e4efb" Nov 6 05:32:00.678346 containerd[1686]: 
time="2025-11-06T05:32:00.678264394Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 05:32:00.678754 containerd[1686]: time="2025-11-06T05:32:00.678733541Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 6 05:32:00.678803 containerd[1686]: time="2025-11-06T05:32:00.678790004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Nov 6 05:32:00.678933 kubelet[2979]: E1106 05:32:00.678905 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 6 05:32:00.678973 kubelet[2979]: E1106 05:32:00.678940 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 6 05:32:00.679140 kubelet[2979]: E1106 05:32:00.679070 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-647585d966-4pbqk_calico-apiserver(e398733d-8974-4f27-8e6f-2b8309aa0ac5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 6 05:32:00.679140 kubelet[2979]: E1106 05:32:00.679091 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647585d966-4pbqk" podUID="e398733d-8974-4f27-8e6f-2b8309aa0ac5"
Nov 6 05:32:00.679279 containerd[1686]: time="2025-11-06T05:32:00.679239346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 6 05:32:01.073226 containerd[1686]: time="2025-11-06T05:32:01.073191221Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 05:32:01.073537 containerd[1686]: time="2025-11-06T05:32:01.073517415Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 6 05:32:01.073602 containerd[1686]: time="2025-11-06T05:32:01.073573283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Nov 6 05:32:01.073701 kubelet[2979]: E1106 05:32:01.073675 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 6 05:32:01.073973 kubelet[2979]: E1106 05:32:01.073708 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 6 05:32:01.073973 kubelet[2979]: E1106 05:32:01.073755 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-65j25_calico-system(8c33e9bc-df47-4f69-aab6-628eca0dd480): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 6 05:32:01.074889 containerd[1686]: time="2025-11-06T05:32:01.074752770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 6 05:32:01.154300 systemd[1]: Started sshd@9-139.178.70.103:22-139.178.68.195:43196.service - OpenSSH per-connection server daemon (139.178.68.195:43196).
Nov 6 05:32:01.191188 sshd[5113]: Accepted publickey for core from 139.178.68.195 port 43196 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU
Nov 6 05:32:01.191767 sshd-session[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 05:32:01.195199 systemd-logind[1657]: New session 12 of user core.
Nov 6 05:32:01.200231 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 6 05:32:01.309217 sshd[5116]: Connection closed by 139.178.68.195 port 43196
Nov 6 05:32:01.309572 sshd-session[5113]: pam_unix(sshd:session): session closed for user core
Nov 6 05:32:01.318585 systemd[1]: sshd@9-139.178.70.103:22-139.178.68.195:43196.service: Deactivated successfully.
Nov 6 05:32:01.320501 systemd[1]: session-12.scope: Deactivated successfully.
Nov 6 05:32:01.321254 systemd-logind[1657]: Session 12 logged out. Waiting for processes to exit.
Nov 6 05:32:01.328416 systemd[1]: Started sshd@10-139.178.70.103:22-139.178.68.195:43210.service - OpenSSH per-connection server daemon (139.178.68.195:43210).
Nov 6 05:32:01.330774 systemd-logind[1657]: Removed session 12.
Nov 6 05:32:01.375313 containerd[1686]: time="2025-11-06T05:32:01.374847312Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 6 05:32:01.376467 containerd[1686]: time="2025-11-06T05:32:01.376450452Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 6 05:32:01.376573 containerd[1686]: time="2025-11-06T05:32:01.376520335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Nov 6 05:32:01.376765 kubelet[2979]: E1106 05:32:01.376709 2979 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 6 05:32:01.376765 kubelet[2979]: E1106 05:32:01.376752 2979 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 6 05:32:01.376926 kubelet[2979]: E1106 05:32:01.376915 2979 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-65j25_calico-system(8c33e9bc-df47-4f69-aab6-628eca0dd480): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 6 05:32:01.378210 kubelet[2979]: E1106 05:32:01.378179 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-65j25" podUID="8c33e9bc-df47-4f69-aab6-628eca0dd480"
Nov 6 05:32:01.385994 sshd[5129]: Accepted publickey for core from 139.178.68.195 port 43210 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU
Nov 6 05:32:01.386944 sshd-session[5129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 05:32:01.389643 systemd-logind[1657]: New session 13 of user core.
Nov 6 05:32:01.395369 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 6 05:32:01.548147 sshd[5132]: Connection closed by 139.178.68.195 port 43210
Nov 6 05:32:01.548873 sshd-session[5129]: pam_unix(sshd:session): session closed for user core
Nov 6 05:32:01.559684 systemd[1]: sshd@10-139.178.70.103:22-139.178.68.195:43210.service: Deactivated successfully.
Nov 6 05:32:01.562643 systemd[1]: session-13.scope: Deactivated successfully.
Nov 6 05:32:01.563647 systemd-logind[1657]: Session 13 logged out. Waiting for processes to exit.
Nov 6 05:32:01.569335 systemd[1]: Started sshd@11-139.178.70.103:22-139.178.68.195:43212.service - OpenSSH per-connection server daemon (139.178.68.195:43212).
Nov 6 05:32:01.570582 systemd-logind[1657]: Removed session 13.
Nov 6 05:32:01.607466 sshd[5143]: Accepted publickey for core from 139.178.68.195 port 43212 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU
Nov 6 05:32:01.608455 sshd-session[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 05:32:01.614310 systemd-logind[1657]: New session 14 of user core.
Nov 6 05:32:01.619580 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 6 05:32:01.711400 sshd[5146]: Connection closed by 139.178.68.195 port 43212
Nov 6 05:32:01.711864 sshd-session[5143]: pam_unix(sshd:session): session closed for user core
Nov 6 05:32:01.714635 systemd-logind[1657]: Session 14 logged out. Waiting for processes to exit.
Nov 6 05:32:01.715423 systemd[1]: sshd@11-139.178.70.103:22-139.178.68.195:43212.service: Deactivated successfully.
Nov 6 05:32:01.718278 systemd[1]: session-14.scope: Deactivated successfully.
Nov 6 05:32:01.720341 systemd-logind[1657]: Removed session 14.
Nov 6 05:32:06.721264 systemd[1]: Started sshd@12-139.178.70.103:22-139.178.68.195:52438.service - OpenSSH per-connection server daemon (139.178.68.195:52438).
Nov 6 05:32:06.802658 sshd[5163]: Accepted publickey for core from 139.178.68.195 port 52438 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU
Nov 6 05:32:06.804090 sshd-session[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 05:32:06.809282 systemd-logind[1657]: New session 15 of user core.
Nov 6 05:32:06.815426 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 6 05:32:06.891200 sshd[5167]: Connection closed by 139.178.68.195 port 52438
Nov 6 05:32:06.892077 sshd-session[5163]: pam_unix(sshd:session): session closed for user core
Nov 6 05:32:06.898390 systemd[1]: sshd@12-139.178.70.103:22-139.178.68.195:52438.service: Deactivated successfully.
Nov 6 05:32:06.899968 systemd[1]: session-15.scope: Deactivated successfully.
Nov 6 05:32:06.900976 systemd-logind[1657]: Session 15 logged out. Waiting for processes to exit.
Nov 6 05:32:06.902022 systemd-logind[1657]: Removed session 15.
Nov 6 05:32:07.599835 kubelet[2979]: E1106 05:32:07.599795 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c4b95784-sxn7m" podUID="af2848d9-f5cf-4973-af81-ec9678a8d6c7"
Nov 6 05:32:09.599870 kubelet[2979]: E1106 05:32:09.599692 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-678545c96c-k4phr" podUID="d9191e4f-5d9b-4552-8319-79f0f354642e"
Nov 6 05:32:11.598647 kubelet[2979]: E1106 05:32:11.598475 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5djlt" podUID="78c7b0f4-b8bc-4238-84a0-4c0e67aa615a"
Nov 6 05:32:11.901886 systemd[1]: Started sshd@13-139.178.70.103:22-139.178.68.195:52446.service - OpenSSH per-connection server daemon (139.178.68.195:52446).
Nov 6 05:32:12.459902 sshd[5205]: Accepted publickey for core from 139.178.68.195 port 52446 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU
Nov 6 05:32:12.461035 sshd-session[5205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 05:32:12.479699 systemd-logind[1657]: New session 16 of user core.
Nov 6 05:32:12.482223 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 6 05:32:12.599171 kubelet[2979]: E1106 05:32:12.598993 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647585d966-dj8ql" podUID="f3152899-b21d-4929-8f12-210f707e4efb"
Nov 6 05:32:12.682255 sshd[5208]: Connection closed by 139.178.68.195 port 52446
Nov 6 05:32:12.685397 sshd-session[5205]: pam_unix(sshd:session): session closed for user core
Nov 6 05:32:12.687452 systemd[1]: sshd@13-139.178.70.103:22-139.178.68.195:52446.service: Deactivated successfully.
Nov 6 05:32:12.690571 systemd[1]: session-16.scope: Deactivated successfully.
Nov 6 05:32:12.691634 systemd-logind[1657]: Session 16 logged out. Waiting for processes to exit.
Nov 6 05:32:12.693778 systemd-logind[1657]: Removed session 16.
Nov 6 05:32:13.599814 kubelet[2979]: E1106 05:32:13.599765 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-65j25" podUID="8c33e9bc-df47-4f69-aab6-628eca0dd480"
Nov 6 05:32:14.598793 kubelet[2979]: E1106 05:32:14.598596 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647585d966-4pbqk" podUID="e398733d-8974-4f27-8e6f-2b8309aa0ac5"
Nov 6 05:32:17.695306 systemd[1]: Started sshd@14-139.178.70.103:22-139.178.68.195:49228.service - OpenSSH per-connection server daemon (139.178.68.195:49228).
Nov 6 05:32:17.742648 sshd[5223]: Accepted publickey for core from 139.178.68.195 port 49228 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU
Nov 6 05:32:17.743718 sshd-session[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 05:32:17.747931 systemd-logind[1657]: New session 17 of user core.
Nov 6 05:32:17.751251 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 6 05:32:17.837253 sshd[5226]: Connection closed by 139.178.68.195 port 49228
Nov 6 05:32:17.838040 sshd-session[5223]: pam_unix(sshd:session): session closed for user core
Nov 6 05:32:17.846032 systemd[1]: sshd@14-139.178.70.103:22-139.178.68.195:49228.service: Deactivated successfully.
Nov 6 05:32:17.847568 systemd-logind[1657]: Session 17 logged out. Waiting for processes to exit.
Nov 6 05:32:17.847935 systemd[1]: session-17.scope: Deactivated successfully.
Nov 6 05:32:17.848928 systemd-logind[1657]: Removed session 17.
Nov 6 05:32:20.598382 kubelet[2979]: E1106 05:32:20.598356 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c4b95784-sxn7m" podUID="af2848d9-f5cf-4973-af81-ec9678a8d6c7"
Nov 6 05:32:21.600090 kubelet[2979]: E1106 05:32:21.599891 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-678545c96c-k4phr" podUID="d9191e4f-5d9b-4552-8319-79f0f354642e"
Nov 6 05:32:22.851565 systemd[1]: Started sshd@15-139.178.70.103:22-139.178.68.195:49238.service - OpenSSH per-connection server daemon (139.178.68.195:49238).
Nov 6 05:32:22.929437 sshd[5237]: Accepted publickey for core from 139.178.68.195 port 49238 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU
Nov 6 05:32:22.930301 sshd-session[5237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 05:32:22.935016 systemd-logind[1657]: New session 18 of user core.
Nov 6 05:32:22.943224 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 6 05:32:23.122190 sshd[5240]: Connection closed by 139.178.68.195 port 49238
Nov 6 05:32:23.122616 sshd-session[5237]: pam_unix(sshd:session): session closed for user core
Nov 6 05:32:23.130365 systemd[1]: sshd@15-139.178.70.103:22-139.178.68.195:49238.service: Deactivated successfully.
Nov 6 05:32:23.132002 systemd[1]: session-18.scope: Deactivated successfully.
Nov 6 05:32:23.133104 systemd-logind[1657]: Session 18 logged out. Waiting for processes to exit.
Nov 6 05:32:23.136197 systemd[1]: Started sshd@16-139.178.70.103:22-139.178.68.195:34308.service - OpenSSH per-connection server daemon (139.178.68.195:34308).
Nov 6 05:32:23.137159 systemd-logind[1657]: Removed session 18.
Nov 6 05:32:23.181635 sshd[5252]: Accepted publickey for core from 139.178.68.195 port 34308 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU
Nov 6 05:32:23.182504 sshd-session[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 05:32:23.186091 systemd-logind[1657]: New session 19 of user core.
Nov 6 05:32:23.194229 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 6 05:32:24.598257 kubelet[2979]: E1106 05:32:24.598212 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647585d966-dj8ql" podUID="f3152899-b21d-4929-8f12-210f707e4efb"
Nov 6 05:32:24.653025 sshd[5255]: Connection closed by 139.178.68.195 port 34308
Nov 6 05:32:24.657833 sshd-session[5252]: pam_unix(sshd:session): session closed for user core
Nov 6 05:32:24.662840 systemd[1]: Started sshd@17-139.178.70.103:22-139.178.68.195:34316.service - OpenSSH per-connection server daemon (139.178.68.195:34316).
Nov 6 05:32:24.671186 systemd[1]: sshd@16-139.178.70.103:22-139.178.68.195:34308.service: Deactivated successfully.
Nov 6 05:32:24.676550 systemd[1]: session-19.scope: Deactivated successfully.
Nov 6 05:32:24.677281 systemd-logind[1657]: Session 19 logged out. Waiting for processes to exit.
Nov 6 05:32:24.680576 systemd-logind[1657]: Removed session 19.
Nov 6 05:32:24.735524 sshd[5262]: Accepted publickey for core from 139.178.68.195 port 34316 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU
Nov 6 05:32:24.736428 sshd-session[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 05:32:24.739160 systemd-logind[1657]: New session 20 of user core.
Nov 6 05:32:24.750264 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 6 05:32:25.333499 sshd[5268]: Connection closed by 139.178.68.195 port 34316
Nov 6 05:32:25.334647 sshd-session[5262]: pam_unix(sshd:session): session closed for user core
Nov 6 05:32:25.343304 systemd[1]: sshd@17-139.178.70.103:22-139.178.68.195:34316.service: Deactivated successfully.
Nov 6 05:32:25.345781 systemd[1]: session-20.scope: Deactivated successfully.
Nov 6 05:32:25.347467 systemd-logind[1657]: Session 20 logged out. Waiting for processes to exit.
Nov 6 05:32:25.350800 systemd[1]: Started sshd@18-139.178.70.103:22-139.178.68.195:34328.service - OpenSSH per-connection server daemon (139.178.68.195:34328).
Nov 6 05:32:25.351589 systemd-logind[1657]: Removed session 20.
Nov 6 05:32:25.407987 sshd[5282]: Accepted publickey for core from 139.178.68.195 port 34328 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU
Nov 6 05:32:25.409759 sshd-session[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 05:32:25.415163 systemd-logind[1657]: New session 21 of user core.
Nov 6 05:32:25.419267 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 6 05:32:25.598335 kubelet[2979]: E1106 05:32:25.598241 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5djlt" podUID="78c7b0f4-b8bc-4238-84a0-4c0e67aa615a"
Nov 6 05:32:25.667421 sshd[5285]: Connection closed by 139.178.68.195 port 34328
Nov 6 05:32:25.668920 sshd-session[5282]: pam_unix(sshd:session): session closed for user core
Nov 6 05:32:25.676317 systemd[1]: sshd@18-139.178.70.103:22-139.178.68.195:34328.service: Deactivated successfully.
Nov 6 05:32:25.678588 systemd[1]: session-21.scope: Deactivated successfully.
Nov 6 05:32:25.680972 systemd-logind[1657]: Session 21 logged out. Waiting for processes to exit.
Nov 6 05:32:25.684678 systemd[1]: Started sshd@19-139.178.70.103:22-139.178.68.195:34336.service - OpenSSH per-connection server daemon (139.178.68.195:34336).
Nov 6 05:32:25.687498 systemd-logind[1657]: Removed session 21.
Nov 6 05:32:25.742147 sshd[5296]: Accepted publickey for core from 139.178.68.195 port 34336 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU
Nov 6 05:32:25.742746 sshd-session[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 05:32:25.749293 systemd-logind[1657]: New session 22 of user core.
Nov 6 05:32:25.753247 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 6 05:32:25.827820 sshd[5299]: Connection closed by 139.178.68.195 port 34336
Nov 6 05:32:25.828272 sshd-session[5296]: pam_unix(sshd:session): session closed for user core
Nov 6 05:32:25.831191 systemd[1]: sshd@19-139.178.70.103:22-139.178.68.195:34336.service: Deactivated successfully.
Nov 6 05:32:25.832718 systemd[1]: session-22.scope: Deactivated successfully.
Nov 6 05:32:25.833641 systemd-logind[1657]: Session 22 logged out. Waiting for processes to exit.
Nov 6 05:32:25.834661 systemd-logind[1657]: Removed session 22.
Nov 6 05:32:27.599463 kubelet[2979]: E1106 05:32:27.599430 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647585d966-4pbqk" podUID="e398733d-8974-4f27-8e6f-2b8309aa0ac5"
Nov 6 05:32:27.600339 kubelet[2979]: E1106 05:32:27.599545 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-65j25" podUID="8c33e9bc-df47-4f69-aab6-628eca0dd480"
Nov 6 05:32:30.838314 systemd[1]: Started sshd@20-139.178.70.103:22-139.178.68.195:34344.service - OpenSSH per-connection server daemon (139.178.68.195:34344).
Nov 6 05:32:30.878852 sshd[5313]: Accepted publickey for core from 139.178.68.195 port 34344 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU
Nov 6 05:32:30.880097 sshd-session[5313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 05:32:30.882755 systemd-logind[1657]: New session 23 of user core.
Nov 6 05:32:30.892307 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 6 05:32:31.037235 sshd[5316]: Connection closed by 139.178.68.195 port 34344
Nov 6 05:32:31.040516 systemd[1]: sshd@20-139.178.70.103:22-139.178.68.195:34344.service: Deactivated successfully.
Nov 6 05:32:31.037572 sshd-session[5313]: pam_unix(sshd:session): session closed for user core
Nov 6 05:32:31.041782 systemd[1]: session-23.scope: Deactivated successfully.
Nov 6 05:32:31.042420 systemd-logind[1657]: Session 23 logged out. Waiting for processes to exit.
Nov 6 05:32:31.043193 systemd-logind[1657]: Removed session 23.
Nov 6 05:32:32.613820 kubelet[2979]: E1106 05:32:32.613620 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c4b95784-sxn7m" podUID="af2848d9-f5cf-4973-af81-ec9678a8d6c7"
Nov 6 05:32:35.599004 kubelet[2979]: E1106 05:32:35.598736 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-678545c96c-k4phr" podUID="d9191e4f-5d9b-4552-8319-79f0f354642e"
Nov 6 05:32:35.600671 kubelet[2979]: E1106 05:32:35.599632 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647585d966-dj8ql" podUID="f3152899-b21d-4929-8f12-210f707e4efb"
Nov 6 05:32:36.051249 systemd[1]: Started sshd@21-139.178.70.103:22-139.178.68.195:35076.service - OpenSSH per-connection server daemon (139.178.68.195:35076).
Nov 6 05:32:36.093028 sshd[5340]: Accepted publickey for core from 139.178.68.195 port 35076 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU
Nov 6 05:32:36.094029 sshd-session[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 05:32:36.097193 systemd-logind[1657]: New session 24 of user core.
Nov 6 05:32:36.107392 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 6 05:32:36.192712 sshd[5343]: Connection closed by 139.178.68.195 port 35076
Nov 6 05:32:36.193167 sshd-session[5340]: pam_unix(sshd:session): session closed for user core
Nov 6 05:32:36.196594 systemd[1]: sshd@21-139.178.70.103:22-139.178.68.195:35076.service: Deactivated successfully.
Nov 6 05:32:36.198717 systemd[1]: session-24.scope: Deactivated successfully.
Nov 6 05:32:36.200671 systemd-logind[1657]: Session 24 logged out. Waiting for processes to exit.
Nov 6 05:32:36.203447 systemd-logind[1657]: Removed session 24.
Nov 6 05:32:36.598145 kubelet[2979]: E1106 05:32:36.598102 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5djlt" podUID="78c7b0f4-b8bc-4238-84a0-4c0e67aa615a"
Nov 6 05:32:39.599385 kubelet[2979]: E1106 05:32:39.599323 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-65j25" podUID="8c33e9bc-df47-4f69-aab6-628eca0dd480"
Nov 6 05:32:39.616595 kubelet[2979]: E1106 05:32:39.616469 2979 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-647585d966-4pbqk" podUID="e398733d-8974-4f27-8e6f-2b8309aa0ac5"
Nov 6 05:32:41.202721 systemd[1]: Started sshd@22-139.178.70.103:22-139.178.68.195:35090.service - OpenSSH per-connection server daemon (139.178.68.195:35090).
Nov 6 05:32:41.257733 sshd[5383]: Accepted publickey for core from 139.178.68.195 port 35090 ssh2: RSA SHA256:w/qImmbEUNw9+sIBk+qE22O5sIpXkB5Q8JXFGUWR9JU
Nov 6 05:32:41.259577 sshd-session[5383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 05:32:41.262415 systemd-logind[1657]: New session 25 of user core.
Nov 6 05:32:41.267303 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 6 05:32:41.374549 sshd[5386]: Connection closed by 139.178.68.195 port 35090
Nov 6 05:32:41.374813 sshd-session[5383]: pam_unix(sshd:session): session closed for user core
Nov 6 05:32:41.380409 systemd[1]: sshd@22-139.178.70.103:22-139.178.68.195:35090.service: Deactivated successfully.
Nov 6 05:32:41.382016 systemd[1]: session-25.scope: Deactivated successfully.
Nov 6 05:32:41.382736 systemd-logind[1657]: Session 25 logged out. Waiting for processes to exit.
Nov 6 05:32:41.384318 systemd-logind[1657]: Removed session 25.