Jul 12 10:10:26.716664 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sat Jul 12 08:25:04 -00 2025 Jul 12 10:10:26.716682 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=4aa07c6f7fdf02f2e05d879e4d058ee0cec0fba29acc0516234352104ac4e6c4 Jul 12 10:10:26.716688 kernel: Disabled fast string operations Jul 12 10:10:26.716692 kernel: BIOS-provided physical RAM map: Jul 12 10:10:26.716696 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jul 12 10:10:26.716700 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jul 12 10:10:26.716706 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Jul 12 10:10:26.716711 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jul 12 10:10:26.716715 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Jul 12 10:10:26.716719 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jul 12 10:10:26.716724 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jul 12 10:10:26.716728 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jul 12 10:10:26.716732 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jul 12 10:10:26.716736 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jul 12 10:10:26.716743 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Jul 12 10:10:26.716747 kernel: NX (Execute Disable) protection: active Jul 12 10:10:26.716752 kernel: APIC: Static calls initialized Jul 12 10:10:26.716757 kernel: SMBIOS 2.7 present. Jul 12 10:10:26.716762 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jul 12 10:10:26.716767 kernel: DMI: Memory slots populated: 1/128 Jul 12 10:10:26.716772 kernel: vmware: hypercall mode: 0x00 Jul 12 10:10:26.716777 kernel: Hypervisor detected: VMware Jul 12 10:10:26.716782 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jul 12 10:10:26.716787 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jul 12 10:10:26.716791 kernel: vmware: using clock offset of 5032486802 ns Jul 12 10:10:26.716796 kernel: tsc: Detected 3408.000 MHz processor Jul 12 10:10:26.716801 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 12 10:10:26.716806 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 12 10:10:26.716811 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jul 12 10:10:26.716816 kernel: total RAM covered: 3072M Jul 12 10:10:26.716822 kernel: Found optimal setting for mtrr clean up Jul 12 10:10:26.716828 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jul 12 10:10:26.717035 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Jul 12 10:10:26.717043 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 12 10:10:26.717049 kernel: Using GB pages for direct mapping Jul 12 10:10:26.717054 kernel: ACPI: Early table checksum verification disabled Jul 12 10:10:26.717059 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jul 12 10:10:26.717064 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jul 12 10:10:26.717069 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jul 12 10:10:26.717076 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jul 12 10:10:26.717083 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jul 12 10:10:26.717088 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jul 12 10:10:26.717093 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jul 12 10:10:26.717099 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Jul 12 10:10:26.717105 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jul 12 10:10:26.717110 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jul 12 10:10:26.717116 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jul 12 10:10:26.717121 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jul 12 10:10:26.717126 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jul 12 10:10:26.717131 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jul 12 10:10:26.717136 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jul 12 10:10:26.717142 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jul 12 10:10:26.717147 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jul 12 10:10:26.717153 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jul 12 10:10:26.717158 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jul 12 10:10:26.717163 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jul 12 10:10:26.717168 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jul 12 10:10:26.717173 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jul 12 10:10:26.717178 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 12 10:10:26.717183 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jul 12 10:10:26.717188 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jul 12 10:10:26.717194 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff] Jul 12 10:10:26.717199 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff] Jul 12 10:10:26.717206 kernel: Zone ranges: Jul 12 10:10:26.717211 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 12 10:10:26.717216 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jul 12 10:10:26.717221 kernel: Normal empty Jul 12 10:10:26.717226 kernel: Device empty Jul 12 10:10:26.717231 kernel: Movable zone start for each node Jul 12 10:10:26.717236 kernel: Early memory node ranges Jul 12 10:10:26.717241 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jul 12 10:10:26.717246 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jul 12 10:10:26.717252 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jul 12 10:10:26.717258 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jul 12 10:10:26.717263 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 12 10:10:26.717268 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jul 12 10:10:26.717273 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jul 12 10:10:26.717278 kernel: ACPI: PM-Timer IO Port: 0x1008 Jul 12 10:10:26.717283 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jul 12 10:10:26.717288 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jul 12 10:10:26.717293 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jul 12 10:10:26.717298 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jul 12 10:10:26.717304 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jul 12 10:10:26.717310 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jul 12 10:10:26.717315 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge 
lint[0x1]) Jul 12 10:10:26.717320 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jul 12 10:10:26.717325 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jul 12 10:10:26.717330 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jul 12 10:10:26.717335 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jul 12 10:10:26.717340 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jul 12 10:10:26.717345 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jul 12 10:10:26.717351 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jul 12 10:10:26.717356 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jul 12 10:10:26.717361 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jul 12 10:10:26.717366 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jul 12 10:10:26.717371 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jul 12 10:10:26.717376 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jul 12 10:10:26.717381 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jul 12 10:10:26.717386 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jul 12 10:10:26.717392 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jul 12 10:10:26.717397 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jul 12 10:10:26.717403 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jul 12 10:10:26.717408 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jul 12 10:10:26.717413 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jul 12 10:10:26.717418 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jul 12 10:10:26.717423 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jul 12 10:10:26.717428 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jul 12 10:10:26.717433 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jul 12 10:10:26.717438 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jul 12 10:10:26.717443 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jul 12 10:10:26.717448 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jul 12 10:10:26.717454 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jul 12 10:10:26.717459 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jul 12 10:10:26.717464 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jul 12 10:10:26.717469 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jul 12 10:10:26.717475 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jul 12 10:10:26.717479 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jul 12 10:10:26.717485 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jul 12 10:10:26.717494 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jul 12 10:10:26.717499 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jul 12 10:10:26.717505 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jul 12 10:10:26.717511 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jul 12 10:10:26.717517 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jul 12 10:10:26.717522 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jul 12 10:10:26.717527 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jul 12 10:10:26.717532 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jul 12 10:10:26.717537 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jul 12 10:10:26.717543 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x31] high edge lint[0x1]) Jul 12 10:10:26.717548 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Jul 12 10:10:26.717554 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jul 12 10:10:26.717560 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jul 12 10:10:26.717565 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jul 12 10:10:26.717570 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jul 12 10:10:26.717576 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jul 12 10:10:26.717581 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jul 12 10:10:26.717586 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jul 12 10:10:26.717592 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jul 12 10:10:26.717597 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jul 12 10:10:26.717603 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jul 12 10:10:26.717609 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jul 12 10:10:26.717614 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jul 12 10:10:26.717620 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jul 12 10:10:26.717625 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jul 12 10:10:26.717630 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jul 12 10:10:26.717636 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jul 12 10:10:26.717641 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jul 12 10:10:26.717646 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jul 12 10:10:26.717652 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jul 12 10:10:26.717657 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jul 12 10:10:26.717663 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jul 12 10:10:26.717669 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jul 12 10:10:26.717674 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jul 12 10:10:26.717680 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jul 12 10:10:26.717685 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jul 12 10:10:26.717690 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jul 12 10:10:26.717696 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jul 12 10:10:26.717701 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jul 12 10:10:26.717706 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jul 12 10:10:26.717712 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jul 12 10:10:26.717718 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jul 12 10:10:26.717723 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jul 12 10:10:26.717729 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jul 12 10:10:26.717734 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jul 12 10:10:26.717739 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jul 12 10:10:26.717745 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jul 12 10:10:26.717750 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jul 12 10:10:26.717755 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jul 12 10:10:26.717761 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jul 12 10:10:26.717767 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jul 12 10:10:26.717772 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jul 12 10:10:26.717777 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jul 12 10:10:26.717783 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jul 12 10:10:26.717788 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jul 12 10:10:26.717793 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jul 12 10:10:26.717799 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jul 12 10:10:26.717804 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jul 12 10:10:26.717809 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jul 12 10:10:26.717814 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jul 12 10:10:26.717821 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jul 12 10:10:26.717826 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jul 12 10:10:26.717832 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jul 12 10:10:26.717847 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jul 12 10:10:26.717852 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jul 12 10:10:26.717858 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jul 12 10:10:26.717863 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jul 12 10:10:26.717868 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jul 12 10:10:26.717874 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jul 12 10:10:26.717879 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jul 12 10:10:26.717886 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jul 12 10:10:26.717892 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jul 12 10:10:26.717897 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jul 12 10:10:26.717902 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jul 12 10:10:26.717908 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jul 12 10:10:26.717913 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jul 12 10:10:26.717918 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jul 12 10:10:26.717923 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jul 12 10:10:26.717929 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jul 12 10:10:26.717934 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jul 12 10:10:26.717941 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jul 12 10:10:26.717946 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jul 12 10:10:26.717951 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jul 12 10:10:26.717957 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jul 12 10:10:26.717962 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jul 12 10:10:26.717967 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jul 12 10:10:26.717972 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jul 12 10:10:26.717978 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jul 12 10:10:26.717983 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jul 12 10:10:26.717990 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jul 12 10:10:26.717995 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 12 10:10:26.718001 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jul 12 10:10:26.718006 kernel: TSC deadline timer available Jul 12 10:10:26.718012 kernel: CPU topo: Max. logical packages: 128 Jul 12 10:10:26.718017 kernel: CPU topo: Max. logical dies: 128 Jul 12 10:10:26.718023 kernel: CPU topo: Max. 
dies per package: 1 Jul 12 10:10:26.718028 kernel: CPU topo: Max. threads per core: 1 Jul 12 10:10:26.718033 kernel: CPU topo: Num. cores per package: 1 Jul 12 10:10:26.718038 kernel: CPU topo: Num. threads per package: 1 Jul 12 10:10:26.718045 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs Jul 12 10:10:26.718050 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jul 12 10:10:26.718056 kernel: Booting paravirtualized kernel on VMware hypervisor Jul 12 10:10:26.718061 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 12 10:10:26.718067 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Jul 12 10:10:26.718074 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Jul 12 10:10:26.718083 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Jul 12 10:10:26.718092 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jul 12 10:10:26.718101 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jul 12 10:10:26.718108 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jul 12 10:10:26.718114 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jul 12 10:10:26.718119 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jul 12 10:10:26.718124 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jul 12 10:10:26.718129 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jul 12 10:10:26.718135 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jul 12 10:10:26.718140 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jul 12 10:10:26.718145 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jul 12 10:10:26.718151 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jul 12 10:10:26.718157 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jul 12 10:10:26.718162 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jul 12 10:10:26.718168 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jul 12 10:10:26.718173 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jul 12 10:10:26.718179 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jul 12 10:10:26.718185 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=4aa07c6f7fdf02f2e05d879e4d058ee0cec0fba29acc0516234352104ac4e6c4 Jul 12 10:10:26.718191 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 12 10:10:26.718197 kernel: random: crng init done Jul 12 10:10:26.718203 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jul 12 10:10:26.718208 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jul 12 10:10:26.718213 kernel: printk: log_buf_len min size: 262144 bytes Jul 12 10:10:26.718219 kernel: printk: log_buf_len: 1048576 bytes Jul 12 10:10:26.718224 kernel: printk: early log buf free: 245592(93%) Jul 12 10:10:26.718230 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 12 10:10:26.718235 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 12 10:10:26.718241 kernel: Fallback order for Node 0: 0 Jul 12 10:10:26.718246 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 524157 Jul 12 10:10:26.718253 kernel: Policy zone: DMA32 Jul 12 10:10:26.718258 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 12 10:10:26.718264 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jul 12 10:10:26.718269 kernel: ftrace: allocating 40097 entries in 157 pages Jul 12 10:10:26.718275 kernel: ftrace: allocated 157 pages with 5 groups Jul 12 10:10:26.718280 kernel: Dynamic Preempt: voluntary Jul 12 10:10:26.718285 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 12 10:10:26.718292 kernel: rcu: RCU event tracing is enabled. Jul 12 10:10:26.718297 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jul 12 10:10:26.718304 kernel: Trampoline variant of Tasks RCU enabled. Jul 12 10:10:26.718309 kernel: Rude variant of Tasks RCU enabled. Jul 12 10:10:26.718315 kernel: Tracing variant of Tasks RCU enabled. Jul 12 10:10:26.718320 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 12 10:10:26.718326 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jul 12 10:10:26.718331 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 12 10:10:26.718336 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 12 10:10:26.718342 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 12 10:10:26.718347 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jul 12 10:10:26.718354 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Jul 12 10:10:26.718359 kernel: Console: colour VGA+ 80x25 Jul 12 10:10:26.718365 kernel: printk: legacy console [tty0] enabled Jul 12 10:10:26.718370 kernel: printk: legacy console [ttyS0] enabled Jul 12 10:10:26.718376 kernel: ACPI: Core revision 20240827 Jul 12 10:10:26.718381 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jul 12 10:10:26.718387 kernel: APIC: Switch to symmetric I/O mode setup Jul 12 10:10:26.718392 kernel: x2apic enabled Jul 12 10:10:26.718398 kernel: APIC: Switched APIC routing to: physical x2apic Jul 12 10:10:26.718404 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 12 10:10:26.718410 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 12 10:10:26.718415 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Jul 12 10:10:26.718421 kernel: Disabled fast string operations Jul 12 10:10:26.718426 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 12 10:10:26.718432 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 12 10:10:26.718437 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 12 10:10:26.718443 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jul 12 10:10:26.718448 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jul 12 10:10:26.718455 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jul 12 10:10:26.718460 kernel: RETBleed: Mitigation: Enhanced IBRS Jul 12 10:10:26.718466 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 12 10:10:26.718471 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 12 10:10:26.718477 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 12 10:10:26.718482 kernel: SRBDS: Unknown: Dependent on hypervisor status Jul 12 10:10:26.718487 kernel: GDS: Unknown: Dependent on hypervisor status Jul 12 10:10:26.718493 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 12 10:10:26.718498 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 12 10:10:26.718505 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 12 10:10:26.718510 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 12 10:10:26.718516 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 12 10:10:26.718521 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 12 10:10:26.718527 kernel: Freeing SMP alternatives memory: 32K Jul 12 10:10:26.718532 kernel: pid_max: default: 131072 minimum: 1024 Jul 12 10:10:26.718538 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 12 10:10:26.718543 kernel: landlock: Up and running. Jul 12 10:10:26.718548 kernel: SELinux: Initializing. Jul 12 10:10:26.718555 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 12 10:10:26.718560 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 12 10:10:26.718566 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jul 12 10:10:26.718571 kernel: Performance Events: Skylake events, core PMU driver. Jul 12 10:10:26.718577 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jul 12 10:10:26.718582 kernel: core: CPUID marked event: 'instructions' unavailable Jul 12 10:10:26.718588 kernel: core: CPUID marked event: 'bus cycles' unavailable Jul 12 10:10:26.718593 kernel: core: CPUID marked event: 'cache references' unavailable Jul 12 10:10:26.718599 kernel: core: CPUID marked event: 'cache misses' unavailable Jul 12 10:10:26.718605 kernel: core: CPUID marked event: 'branch instructions' unavailable Jul 12 10:10:26.718610 kernel: core: CPUID marked event: 'branch misses' unavailable Jul 12 10:10:26.718615 kernel: ... version: 1 Jul 12 10:10:26.718621 kernel: ... bit width: 48 Jul 12 10:10:26.718626 kernel: ... generic registers: 4 Jul 12 10:10:26.718632 kernel: ... value mask: 0000ffffffffffff Jul 12 10:10:26.718637 kernel: ... max period: 000000007fffffff Jul 12 10:10:26.718643 kernel: ... fixed-purpose events: 0 Jul 12 10:10:26.718649 kernel: ... 
event mask: 000000000000000f Jul 12 10:10:26.718654 kernel: signal: max sigframe size: 1776 Jul 12 10:10:26.718660 kernel: rcu: Hierarchical SRCU implementation. Jul 12 10:10:26.718666 kernel: rcu: Max phase no-delay instances is 400. Jul 12 10:10:26.718671 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level Jul 12 10:10:26.718676 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 12 10:10:26.718682 kernel: smp: Bringing up secondary CPUs ... Jul 12 10:10:26.718688 kernel: smpboot: x86: Booting SMP configuration: Jul 12 10:10:26.718693 kernel: .... node #0, CPUs: #1 Jul 12 10:10:26.718698 kernel: Disabled fast string operations Jul 12 10:10:26.718705 kernel: smp: Brought up 1 node, 2 CPUs Jul 12 10:10:26.718710 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jul 12 10:10:26.718716 kernel: Memory: 1924232K/2096628K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54608K init, 2360K bss, 161012K reserved, 0K cma-reserved) Jul 12 10:10:26.718721 kernel: devtmpfs: initialized Jul 12 10:10:26.718727 kernel: x86/mm: Memory block size: 128MB Jul 12 10:10:26.718733 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jul 12 10:10:26.718738 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 12 10:10:26.718743 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jul 12 10:10:26.718749 kernel: pinctrl core: initialized pinctrl subsystem Jul 12 10:10:26.718756 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 12 10:10:26.718761 kernel: audit: initializing netlink subsys (disabled) Jul 12 10:10:26.718767 kernel: audit: type=2000 audit(1752315023.300:1): state=initialized audit_enabled=0 res=1 Jul 12 10:10:26.718772 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 12 10:10:26.718782 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 12 10:10:26.718787 kernel: cpuidle: using governor menu Jul 12 10:10:26.718793 kernel: Simple Boot Flag at 0x36 set to 0x80 Jul 12 10:10:26.718798 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 12 10:10:26.718804 kernel: dca service started, version 1.12.1 Jul 12 10:10:26.718810 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f] Jul 12 10:10:26.718823 kernel: PCI: Using configuration type 1 for base access Jul 12 10:10:26.718830 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 12 10:10:26.720889 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 12 10:10:26.720899 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 12 10:10:26.720905 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 12 10:10:26.720911 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 12 10:10:26.720917 kernel: ACPI: Added _OSI(Module Device) Jul 12 10:10:26.720925 kernel: ACPI: Added _OSI(Processor Device) Jul 12 10:10:26.720931 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 12 10:10:26.720937 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 12 10:10:26.720943 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jul 12 10:10:26.720949 kernel: ACPI: Interpreter enabled Jul 12 10:10:26.720955 kernel: ACPI: PM: (supports S0 S1 S5) Jul 12 10:10:26.720960 kernel: ACPI: Using IOAPIC for interrupt routing Jul 12 10:10:26.720966 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 12 10:10:26.720972 kernel: PCI: Using E820 reservations for host bridge windows Jul 12 10:10:26.720979 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jul 12 10:10:26.720985 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jul 12 10:10:26.721074 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 12 10:10:26.721130 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jul 12 10:10:26.721178 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jul 12 10:10:26.721186 kernel: PCI host bridge to bus 0000:00 Jul 12 10:10:26.721238 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 12 10:10:26.721285 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jul 12 10:10:26.721328 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 12 10:10:26.721370 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 12 10:10:26.721412 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jul 12 10:10:26.721454 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jul 12 10:10:26.721512 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint Jul 12 10:10:26.721572 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge Jul 12 10:10:26.721625 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 12 10:10:26.721680 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Jul 12 10:10:26.721736 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint Jul 12 10:10:26.721794 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f] Jul 12 10:10:26.722866 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Jul 12 10:10:26.722927 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk Jul 12 10:10:26.722980 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Jul 12 10:10:26.723031 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk Jul 12 10:10:26.723086 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jul 12 10:10:26.723136 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jul 12 10:10:26.723188 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jul 12 10:10:26.723241 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 
0x088000 conventional PCI endpoint Jul 12 10:10:26.723291 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf] Jul 12 10:10:26.723340 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit] Jul 12 10:10:26.723393 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint Jul 12 10:10:26.723442 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f] Jul 12 10:10:26.723495 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref] Jul 12 10:10:26.723543 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff] Jul 12 10:10:26.723592 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref] Jul 12 10:10:26.723639 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 12 10:10:26.723694 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge Jul 12 10:10:26.723743 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jul 12 10:10:26.723792 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 12 10:10:26.725965 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 12 10:10:26.726026 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 12 10:10:26.726084 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.726136 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 12 10:10:26.726186 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 12 10:10:26.726235 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 12 10:10:26.726283 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.726336 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.726391 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 12 10:10:26.726441 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 12 10:10:26.726490 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 12 10:10:26.726539 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 12 10:10:26.726587 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.726642 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.726694 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 12 10:10:26.726743 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 12 10:10:26.726795 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 12 10:10:26.726858 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 12 10:10:26.726909 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.726965 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.727019 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 12 10:10:26.727068 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 12 10:10:26.727116 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 12 10:10:26.727165 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.727218 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.727270 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 12 10:10:26.727318 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 12 10:10:26.727367 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 12 10:10:26.727418 
kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.727471 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.727521 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 12 10:10:26.727570 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 12 10:10:26.727618 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 12 10:10:26.727667 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.727720 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.727772 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 12 10:10:26.727821 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 12 10:10:26.729902 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 12 10:10:26.729962 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.730020 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.730071 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 12 10:10:26.730120 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 12 10:10:26.730174 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 12 10:10:26.730224 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.730280 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.730331 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 12 10:10:26.730380 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 12 10:10:26.730430 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 12 10:10:26.730479 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.730536 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.730586 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 12 10:10:26.730636 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 12 10:10:26.730686 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 12 10:10:26.730734 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 12 10:10:26.730783 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.730942 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.731002 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 12 10:10:26.731051 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 12 10:10:26.731100 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 12 10:10:26.731149 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 12 10:10:26.731197 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.731251 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.731301 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 12 10:10:26.731352 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 12 10:10:26.731400 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 12 10:10:26.731448 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.731504 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.731637 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 12 10:10:26.731688 kernel: pci 
0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 12 10:10:26.731737 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 12 10:10:26.731789 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.731851 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.731902 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 12 10:10:26.731950 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 12 10:10:26.731999 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 12 10:10:26.732048 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.732102 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.732154 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 12 10:10:26.732203 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 12 10:10:26.732251 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 12 10:10:26.732300 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.732353 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.732404 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 12 10:10:26.732453 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 12 10:10:26.732501 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 12 10:10:26.732553 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.732608 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.732658 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 12 10:10:26.732707 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 12 10:10:26.732755 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 12 10:10:26.732811 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 12 10:10:26.732871 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.732929 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.732979 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 12 10:10:26.733029 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 12 10:10:26.733077 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 12 10:10:26.733129 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 12 10:10:26.733185 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.733238 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.733287 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 12 10:10:26.733336 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 12 10:10:26.733384 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 12 10:10:26.733432 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 12 10:10:26.733484 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.733537 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.733587 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 12 10:10:26.733635 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 12 10:10:26.733684 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 12 
10:10:26.733731 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.733784 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.733847 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 12 10:10:26.733901 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 12 10:10:26.733950 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 12 10:10:26.734000 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.734056 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.734106 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 12 10:10:26.734155 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 12 10:10:26.734207 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 12 10:10:26.734257 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.734311 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.734362 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 12 10:10:26.734411 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 12 10:10:26.734461 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 12 10:10:26.734509 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.734566 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.734616 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 12 10:10:26.734666 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 12 10:10:26.734714 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 12 10:10:26.734763 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.734817 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.734876 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 12 10:10:26.734930 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 12 10:10:26.734979 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 12 10:10:26.735027 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 12 10:10:26.735076 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.735132 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.735182 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 12 10:10:26.735231 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 12 10:10:26.735283 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 12 10:10:26.735336 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 12 10:10:26.735396 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.735464 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.735531 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 12 10:10:26.735591 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 12 10:10:26.735649 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 12 10:10:26.735718 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.735790 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.735867 kernel: pci 0000:00:18.3: PCI bridge to [bus 
1e] Jul 12 10:10:26.735929 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 12 10:10:26.735979 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 12 10:10:26.736029 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.736082 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.736136 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 12 10:10:26.736185 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 12 10:10:26.736234 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 12 10:10:26.736282 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.736339 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.736389 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 12 10:10:26.736437 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 12 10:10:26.736488 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 12 10:10:26.736539 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.736593 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.736642 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 12 10:10:26.736707 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 12 10:10:26.736775 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 12 10:10:26.736825 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.736890 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 12 10:10:26.736944 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 12 10:10:26.736993 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 12 10:10:26.737041 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 12 10:10:26.737090 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.737148 kernel: pci_bus 0000:01: extended config space not accessible Jul 12 10:10:26.737201 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 12 10:10:26.737254 kernel: pci_bus 0000:02: extended config space not accessible Jul 12 10:10:26.737265 kernel: acpiphp: Slot [32] registered Jul 12 10:10:26.737271 kernel: acpiphp: Slot [33] registered Jul 12 10:10:26.737277 kernel: acpiphp: Slot [34] registered Jul 12 10:10:26.737283 kernel: acpiphp: Slot [35] registered Jul 12 10:10:26.737289 kernel: acpiphp: Slot [36] registered Jul 12 10:10:26.737294 kernel: acpiphp: Slot [37] registered Jul 12 10:10:26.737300 kernel: acpiphp: Slot [38] registered Jul 12 10:10:26.737306 kernel: acpiphp: Slot [39] registered Jul 12 10:10:26.737312 kernel: acpiphp: Slot [40] registered Jul 12 10:10:26.737319 kernel: acpiphp: Slot [41] registered Jul 12 10:10:26.737324 kernel: acpiphp: Slot [42] registered Jul 12 10:10:26.737330 kernel: acpiphp: Slot [43] registered Jul 12 10:10:26.737336 kernel: acpiphp: Slot [44] registered Jul 12 10:10:26.737342 kernel: acpiphp: Slot [45] registered Jul 12 10:10:26.737348 kernel: acpiphp: Slot [46] registered Jul 12 10:10:26.737353 kernel: acpiphp: Slot [47] registered Jul 12 10:10:26.737359 kernel: acpiphp: Slot [48] registered Jul 12 10:10:26.737365 kernel: acpiphp: Slot [49] registered Jul 12 10:10:26.737370 kernel: acpiphp: Slot [50] registered Jul 12 10:10:26.737377 kernel: acpiphp: Slot [51] registered Jul 12 
10:10:26.737383 kernel: acpiphp: Slot [52] registered Jul 12 10:10:26.737389 kernel: acpiphp: Slot [53] registered Jul 12 10:10:26.737394 kernel: acpiphp: Slot [54] registered Jul 12 10:10:26.737400 kernel: acpiphp: Slot [55] registered Jul 12 10:10:26.737406 kernel: acpiphp: Slot [56] registered Jul 12 10:10:26.737411 kernel: acpiphp: Slot [57] registered Jul 12 10:10:26.737417 kernel: acpiphp: Slot [58] registered Jul 12 10:10:26.737423 kernel: acpiphp: Slot [59] registered Jul 12 10:10:26.737430 kernel: acpiphp: Slot [60] registered Jul 12 10:10:26.737436 kernel: acpiphp: Slot [61] registered Jul 12 10:10:26.737441 kernel: acpiphp: Slot [62] registered Jul 12 10:10:26.737447 kernel: acpiphp: Slot [63] registered Jul 12 10:10:26.737496 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jul 12 10:10:26.737545 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jul 12 10:10:26.737594 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jul 12 10:10:26.737642 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jul 12 10:10:26.737693 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jul 12 10:10:26.737741 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jul 12 10:10:26.737798 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Jul 12 10:10:26.737867 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Jul 12 10:10:26.737918 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Jul 12 10:10:26.737968 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Jul 12 10:10:26.738025 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 12 10:10:26.738076 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Jul 12 10:10:26.738130 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 12 10:10:26.738203 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 12 10:10:26.738256 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 12 10:10:26.738308 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 12 10:10:26.738359 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 12 10:10:26.738411 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 12 10:10:26.738461 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 12 10:10:26.738518 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 12 10:10:26.738599 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Jul 12 10:10:26.738652 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Jul 12 10:10:26.738702 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Jul 12 10:10:26.738752 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Jul 12 10:10:26.738807 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] Jul 12 10:10:26.738877 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Jul 12 10:10:26.738931 kernel: pci 0000:0b:00.0: supports D1 D2 Jul 12 10:10:26.738981 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 12 10:10:26.739032 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 12 10:10:26.739083 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 12 10:10:26.739134 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 12 10:10:26.739186 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 12 10:10:26.739237 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 12 10:10:26.739289 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 12 10:10:26.739343 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 12 10:10:26.739394 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 12 10:10:26.739445 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 12 10:10:26.739495 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 12 10:10:26.739547 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 12 10:10:26.739598 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 12 10:10:26.739649 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 12 10:10:26.739715 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 12 10:10:26.739780 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 12 10:10:26.741843 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 12 10:10:26.741918 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 12 10:10:26.741974 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 12 10:10:26.742029 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 12 10:10:26.742082 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 12 10:10:26.742134 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 12 10:10:26.742190 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 12 10:10:26.742241 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 12 10:10:26.742293 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 12 10:10:26.742344 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 12 10:10:26.742354 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jul 12 10:10:26.742361 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jul 12 10:10:26.742367 kernel: ACPI: PCI: Interrupt link LNKB disabled Jul 12 10:10:26.742375 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 12 10:10:26.742381 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jul 12 10:10:26.742387 kernel: iommu: Default domain type: Translated Jul 12 10:10:26.742393 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 12 10:10:26.742399 kernel: PCI: Using ACPI for IRQ routing Jul 12 10:10:26.742405 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 12 10:10:26.742411 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jul 12 10:10:26.742417 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jul 12 10:10:26.742467 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jul 12 10:10:26.742519 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jul 12 10:10:26.742569 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 12 10:10:26.742577 kernel: vgaarb: loaded Jul 12 10:10:26.742584 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jul 12 10:10:26.742590 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jul 12 10:10:26.742595 kernel: clocksource: Switched to clocksource tsc-early Jul 12 10:10:26.742601 kernel: VFS: Disk quotas dquot_6.6.0 Jul 12 10:10:26.742607 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 12 10:10:26.742613 kernel: pnp: PnP ACPI init Jul 12 10:10:26.742670 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jul 12 10:10:26.742718 kernel: system 
00:00: [io 0x1040-0x104f] has been reserved Jul 12 10:10:26.742763 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jul 12 10:10:26.742873 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jul 12 10:10:26.742935 kernel: pnp 00:06: [dma 2] Jul 12 10:10:26.742989 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jul 12 10:10:26.743041 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jul 12 10:10:26.743087 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jul 12 10:10:26.743096 kernel: pnp: PnP ACPI: found 8 devices Jul 12 10:10:26.743103 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 12 10:10:26.743109 kernel: NET: Registered PF_INET protocol family Jul 12 10:10:26.743115 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 12 10:10:26.743121 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 12 10:10:26.743127 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 12 10:10:26.743135 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 12 10:10:26.743141 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 12 10:10:26.743147 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 12 10:10:26.743153 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 12 10:10:26.743159 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 12 10:10:26.743164 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 12 10:10:26.743170 kernel: NET: Registered PF_XDP protocol family Jul 12 10:10:26.743225 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 12 10:10:26.743280 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 12 10:10:26.743336 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 12 10:10:26.743390 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 12 10:10:26.743442 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 12 10:10:26.743513 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jul 12 10:10:26.743568 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jul 12 10:10:26.743620 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jul 12 10:10:26.743673 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jul 12 10:10:26.743807 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jul 12 10:10:26.744078 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jul 12 10:10:26.744155 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jul 12 10:10:26.744231 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jul 12 10:10:26.744308 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jul 12 10:10:26.744442 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jul 12 10:10:26.744502 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jul 12 
10:10:26.744557 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jul 12 10:10:26.744615 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jul 12 10:10:26.744669 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jul 12 10:10:26.744720 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jul 12 10:10:26.744771 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jul 12 10:10:26.744821 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jul 12 10:10:26.744891 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jul 12 10:10:26.744944 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Jul 12 10:10:26.744995 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Jul 12 10:10:26.745048 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.745098 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.745148 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.745197 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.745247 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.745296 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.745346 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.745401 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.745454 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.745505 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.745555 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.745606 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.745656 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.745705 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.745755 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.745804 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.745871 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.745922 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.745971 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.746020 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.746070 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.746119 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.746169 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.746221 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.746286 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.746339 kernel: pci 0000:00:17.5: 
bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.746388 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.746437 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.746486 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.746536 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.746586 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.746639 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.746688 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.746737 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.746787 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.747427 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.747495 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.747549 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.747602 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.747659 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.747717 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.747791 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.747898 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.747955 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.748006 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.748064 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.748137 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.748188 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.748241 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.748290 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.748339 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.748388 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.748438 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.748487 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.748537 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.748586 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.748636 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.748693 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.748749 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.748800 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.748861 kernel: pci 0000:00:17.4: bridge window [io size 
0x1000]: can't assign; no space Jul 12 10:10:26.748921 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.748973 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.749023 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.749074 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.749122 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.749174 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.749227 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.749278 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.749328 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.749379 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.749429 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.749479 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.749529 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.749589 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.749648 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.749699 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.749748 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.749800 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.749867 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.749921 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.749972 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.750025 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Jul 12 10:10:26.750082 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Jul 12 10:10:26.750137 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 12 10:10:26.750188 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jul 12 10:10:26.750238 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 12 10:10:26.750287 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 12 10:10:26.750337 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 12 10:10:26.750390 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Jul 12 10:10:26.750444 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 12 10:10:26.750500 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 12 10:10:26.750554 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 12 10:10:26.750603 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jul 12 10:10:26.750655 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 12 10:10:26.750704 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 12 10:10:26.750752 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 12 10:10:26.750806 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit 
pref] Jul 12 10:10:26.750880 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 12 10:10:26.750934 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 12 10:10:26.750993 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 12 10:10:26.751044 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 12 10:10:26.751094 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 12 10:10:26.751144 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 12 10:10:26.751195 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 12 10:10:26.751245 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 12 10:10:26.751301 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 12 10:10:26.751353 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 12 10:10:26.751406 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 12 10:10:26.751456 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 12 10:10:26.751505 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 12 10:10:26.751555 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 12 10:10:26.751604 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 12 10:10:26.751663 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 12 10:10:26.751714 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 12 10:10:26.751766 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 12 10:10:26.751816 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 12 10:10:26.751885 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Jul 12 10:10:26.751947 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 12 10:10:26.752004 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 12 10:10:26.752069 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 12 10:10:26.752142 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jul 12 10:10:26.752204 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 12 10:10:26.752331 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 12 10:10:26.752415 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 12 10:10:26.752490 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 12 10:10:26.752543 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 12 10:10:26.752593 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 12 10:10:26.752643 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 12 10:10:26.752692 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 12 10:10:26.752744 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 12 10:10:26.752801 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 12 10:10:26.752869 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 12 10:10:26.752925 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 12 10:10:26.752976 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 12 10:10:26.753026 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 12 10:10:26.753078 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 12 10:10:26.753127 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 12 10:10:26.753194 kernel: pci 
0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 12 10:10:26.753266 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 12 10:10:26.753328 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 12 10:10:26.753400 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 12 10:10:26.753477 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 12 10:10:26.753538 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 12 10:10:26.753619 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 12 10:10:26.753680 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 12 10:10:26.753730 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 12 10:10:26.753780 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 12 10:10:26.755552 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 12 10:10:26.755665 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 12 10:10:26.755746 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 12 10:10:26.755806 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 12 10:10:26.755870 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 12 10:10:26.755924 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 12 10:10:26.755975 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 12 10:10:26.756025 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 12 10:10:26.756074 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 12 10:10:26.756129 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 12 10:10:26.756182 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 12 10:10:26.756231 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 12 10:10:26.756283 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 12 10:10:26.756348 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 12 10:10:26.756410 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 12 10:10:26.756485 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 12 10:10:26.756558 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 12 10:10:26.756635 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 12 10:10:26.756702 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 12 10:10:26.756768 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 12 10:10:26.756848 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 12 10:10:26.756915 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 12 10:10:26.756982 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 12 10:10:26.757055 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 12 10:10:26.757129 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 12 10:10:26.757184 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 12 10:10:26.757234 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 12 10:10:26.757283 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 12 10:10:26.757333 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 12 10:10:26.757382 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 12 10:10:26.757431 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] 
Jul 12 10:10:26.757488 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 12 10:10:26.757540 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 12 10:10:26.757593 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 12 10:10:26.757643 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 12 10:10:26.757694 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 12 10:10:26.757744 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 12 10:10:26.757804 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 12 10:10:26.757880 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 12 10:10:26.757945 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 12 10:10:26.758021 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 12 10:10:26.758086 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 12 10:10:26.758145 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 12 10:10:26.758195 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 12 10:10:26.758263 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 12 10:10:26.758323 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 12 10:10:26.758385 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 12 10:10:26.758464 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 12 10:10:26.758524 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 12 10:10:26.758809 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 12 10:10:26.758902 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jul 12 10:10:26.758953 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 12 10:10:26.758998 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 12 10:10:26.759042 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jul 12 10:10:26.759091 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jul 12 10:10:26.759143 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jul 12 10:10:26.759204 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jul 12 10:10:26.759258 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 12 10:10:26.759304 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jul 12 10:10:26.759349 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 12 10:10:26.759394 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 12 10:10:26.759439 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jul 12 10:10:26.759486 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jul 12 10:10:26.759539 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jul 12 10:10:26.759585 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jul 12 10:10:26.759629 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jul 12 10:10:26.759679 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jul 12 10:10:26.759725 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jul 12 10:10:26.759779 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jul 12 10:10:26.759859 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jul 12 10:10:26.759909 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jul 
12 10:10:26.759972 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jul 12 10:10:26.760039 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jul 12 10:10:26.760086 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jul 12 10:10:26.760138 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jul 12 10:10:26.760184 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 12 10:10:26.760236 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jul 12 10:10:26.760282 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jul 12 10:10:26.760331 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jul 12 10:10:26.760376 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jul 12 10:10:26.760426 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jul 12 10:10:26.760471 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jul 12 10:10:26.760524 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jul 12 10:10:26.760575 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jul 12 10:10:26.760647 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jul 12 10:10:26.760703 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jul 12 10:10:26.760774 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jul 12 10:10:26.761278 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jul 12 10:10:26.761365 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jul 12 10:10:26.761421 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jul 12 10:10:26.761473 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jul 12 10:10:26.761540 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jul 12 10:10:26.761593 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 12 10:10:26.761669 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jul 12 10:10:26.761738 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 12 10:10:26.761803 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jul 12 10:10:26.761929 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jul 12 10:10:26.762003 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jul 12 10:10:26.763897 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jul 12 10:10:26.763971 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jul 12 10:10:26.764025 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 12 10:10:26.764077 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jul 12 10:10:26.764126 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jul 12 10:10:26.764172 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 12 10:10:26.764227 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jul 12 10:10:26.764275 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jul 12 10:10:26.764322 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jul 12 10:10:26.764377 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jul 12 10:10:26.764425 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jul 12 10:10:26.764483 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jul 12 10:10:26.764536 kernel: 
pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jul 12 10:10:26.764583 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 12 10:10:26.764633 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jul 12 10:10:26.764682 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 12 10:10:26.764735 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jul 12 10:10:26.764781 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jul 12 10:10:26.764831 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jul 12 10:10:26.764902 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jul 12 10:10:26.764954 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jul 12 10:10:26.765000 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 12 10:10:26.765053 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jul 12 10:10:26.765098 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jul 12 10:10:26.765144 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jul 12 10:10:26.765194 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jul 12 10:10:26.765239 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jul 12 10:10:26.765293 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jul 12 10:10:26.765349 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jul 12 10:10:26.765396 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jul 12 10:10:26.765447 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jul 12 10:10:26.765493 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 12 10:10:26.765542 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jul 12 10:10:26.765588 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jul 12 10:10:26.765638 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jul 12 10:10:26.765695 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jul 12 10:10:26.765750 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jul 12 10:10:26.765802 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jul 12 10:10:26.765869 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jul 12 10:10:26.765917 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 12 10:10:26.765974 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 12 10:10:26.765986 kernel: PCI: CLS 32 bytes, default 64 Jul 12 10:10:26.765993 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 12 10:10:26.765999 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 12 10:10:26.766005 kernel: clocksource: Switched to clocksource tsc Jul 12 10:10:26.766012 kernel: Initialise system trusted keyrings Jul 12 10:10:26.766018 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 12 10:10:26.766024 kernel: Key type asymmetric registered Jul 12 10:10:26.766034 kernel: Asymmetric key parser 'x509' registered Jul 12 10:10:26.766044 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 12 10:10:26.766052 kernel: io scheduler mq-deadline registered Jul 12 10:10:26.766058 kernel: io scheduler kyber registered Jul 12 10:10:26.766064 kernel: io scheduler bfq 
registered Jul 12 10:10:26.766121 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jul 12 10:10:26.766174 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.766228 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jul 12 10:10:26.766279 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.766334 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jul 12 10:10:26.766384 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.766435 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jul 12 10:10:26.766486 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.766537 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jul 12 10:10:26.766587 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.766638 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jul 12 10:10:26.766688 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.766744 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jul 12 10:10:26.766795 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.767351 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jul 12 10:10:26.767413 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.767468 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jul 12 10:10:26.767520 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.767576 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jul 12 10:10:26.767627 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.767678 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jul 12 10:10:26.767729 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.767781 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jul 12 10:10:26.767886 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.767960 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jul 12 10:10:26.768014 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.768069 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jul 12 10:10:26.768119 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ 
Jul 12 10:10:26.768170 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jul 12 10:10:26.768220 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.768274 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jul 12 10:10:26.768325 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.768376 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jul 12 10:10:26.768429 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.768799 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jul 12 10:10:26.768866 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.768921 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jul 12 10:10:26.768972 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.769024 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jul 12 10:10:26.769075 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.769126 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jul 12 10:10:26.769180 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.769232 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jul 12 10:10:26.769282 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.769334 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jul 12 10:10:26.769384 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.769435 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jul 12 10:10:26.769485 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.769539 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jul 12 10:10:26.769589 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.769640 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jul 12 10:10:26.769690 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.769748 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jul 12 10:10:26.769801 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.769875 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jul 12 10:10:26.769928 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 
10:10:26.769985 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jul 12 10:10:26.770037 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.770089 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jul 12 10:10:26.770139 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.770191 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jul 12 10:10:26.770242 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.770294 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jul 12 10:10:26.770348 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 12 10:10:26.770357 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 12 10:10:26.770366 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 12 10:10:26.770373 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 12 10:10:26.770379 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jul 12 10:10:26.770385 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 12 10:10:26.770392 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 12 10:10:26.770445 kernel: rtc_cmos 00:01: registered as rtc0 Jul 12 10:10:26.770495 kernel: rtc_cmos 00:01: setting system clock to 2025-07-12T10:10:26 UTC (1752315026) Jul 12 10:10:26.770505 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 12 10:10:26.770548 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jul 12 10:10:26.770557 kernel: intel_pstate: CPU model not supported Jul 12 10:10:26.770563 kernel: NET: Registered PF_INET6 protocol family Jul 12 10:10:26.770570 kernel: Segment Routing with IPv6 Jul 12 10:10:26.770576 kernel: In-situ OAM (IOAM) with IPv6 Jul 12 10:10:26.770582 kernel: NET: Registered PF_PACKET protocol family Jul 12 10:10:26.770590 kernel: Key type dns_resolver registered Jul 12 10:10:26.770600 kernel: IPI shorthand broadcast: enabled Jul 12 10:10:26.770611 kernel: sched_clock: Marking stable (2772003415, 178860604)->(2964920288, -14056269) Jul 12 10:10:26.770617 kernel: registered taskstats version 1 Jul 12 10:10:26.770624 kernel: Loading compiled-in X.509 certificates Jul 12 10:10:26.770630 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 0b66546913a05d1e6699856b7b667f16de808d3b' Jul 12 10:10:26.770636 kernel: Demotion targets for Node 0: null Jul 12 10:10:26.770642 kernel: Key type .fscrypt registered Jul 12 10:10:26.770649 kernel: Key type fscrypt-provisioning registered Jul 12 10:10:26.770656 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 12 10:10:26.770663 kernel: ima: Allocated hash algorithm: sha1 Jul 12 10:10:26.770669 kernel: ima: No architecture policies found Jul 12 10:10:26.770675 kernel: clk: Disabling unused clocks Jul 12 10:10:26.770681 kernel: Warning: unable to open an initial console. 
Jul 12 10:10:26.770688 kernel: Freeing unused kernel image (initmem) memory: 54608K Jul 12 10:10:26.770694 kernel: Write protecting the kernel read-only data: 24576k Jul 12 10:10:26.770701 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 12 10:10:26.770707 kernel: Run /init as init process Jul 12 10:10:26.770714 kernel: with arguments: Jul 12 10:10:26.770721 kernel: /init Jul 12 10:10:26.770727 kernel: with environment: Jul 12 10:10:26.770733 kernel: HOME=/ Jul 12 10:10:26.770739 kernel: TERM=linux Jul 12 10:10:26.770745 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 12 10:10:26.770752 systemd[1]: Successfully made /usr/ read-only. Jul 12 10:10:26.770761 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 12 10:10:26.770769 systemd[1]: Detected virtualization vmware. Jul 12 10:10:26.770775 systemd[1]: Detected architecture x86-64. Jul 12 10:10:26.770781 systemd[1]: Running in initrd. Jul 12 10:10:26.770787 systemd[1]: No hostname configured, using default hostname. Jul 12 10:10:26.770794 systemd[1]: Hostname set to . Jul 12 10:10:26.770800 systemd[1]: Initializing machine ID from random generator. Jul 12 10:10:26.770807 systemd[1]: Queued start job for default target initrd.target. Jul 12 10:10:26.770813 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 10:10:26.770821 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 10:10:26.770828 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 12 10:10:26.770845 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 10:10:26.770852 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 12 10:10:26.770859 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 12 10:10:26.770867 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 12 10:10:26.770875 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 12 10:10:26.770882 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 10:10:26.770888 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 10:10:26.770895 systemd[1]: Reached target paths.target - Path Units. Jul 12 10:10:26.770901 systemd[1]: Reached target slices.target - Slice Units. Jul 12 10:10:26.770907 systemd[1]: Reached target swap.target - Swaps. Jul 12 10:10:26.770913 systemd[1]: Reached target timers.target - Timer Units. Jul 12 10:10:26.770920 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 10:10:26.770926 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 10:10:26.770934 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 12 10:10:26.770940 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 12 10:10:26.770947 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jul 12 10:10:26.770953 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 10:10:26.770959 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 10:10:26.770966 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 10:10:26.770972 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 12 10:10:26.770979 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 10:10:26.770986 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 12 10:10:26.770993 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 12 10:10:26.771003 systemd[1]: Starting systemd-fsck-usr.service... Jul 12 10:10:26.771012 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 10:10:26.771019 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 10:10:26.771025 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 10:10:26.771031 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 12 10:10:26.771040 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 10:10:26.771046 systemd[1]: Finished systemd-fsck-usr.service. Jul 12 10:10:26.771053 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 12 10:10:26.771060 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 10:10:26.771083 systemd-journald[244]: Collecting audit messages is disabled. Jul 12 10:10:26.771101 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 10:10:26.771108 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 10:10:26.771115 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 12 10:10:26.771121 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 12 10:10:26.771128 kernel: Bridge firewalling registered Jul 12 10:10:26.771136 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 10:10:26.771142 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 10:10:26.771149 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 10:10:26.771156 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 10:10:26.771162 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 12 10:10:26.771168 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 10:10:26.771176 systemd-journald[244]: Journal started Jul 12 10:10:26.771192 systemd-journald[244]: Runtime Journal (/run/log/journal/3bbb225b108441439fd9a9615c3af1a8) is 4.8M, max 38.8M, 34M free. Jul 12 10:10:26.718853 systemd-modules-load[245]: Inserted module 'overlay' Jul 12 10:10:26.748721 systemd-modules-load[245]: Inserted module 'br_netfilter' Jul 12 10:10:26.779943 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 10:10:26.781090 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jul 12 10:10:26.787559 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=4aa07c6f7fdf02f2e05d879e4d058ee0cec0fba29acc0516234352104ac4e6c4 Jul 12 10:10:26.792963 systemd-tmpfiles[274]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 12 10:10:26.795186 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 10:10:26.797807 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 10:10:26.823661 systemd-resolved[309]: Positive Trust Anchors: Jul 12 10:10:26.823945 systemd-resolved[309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 10:10:26.824129 systemd-resolved[309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 10:10:26.826428 systemd-resolved[309]: Defaulting to hostname 'linux'. Jul 12 10:10:26.827196 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 10:10:26.827488 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 10:10:26.844856 kernel: SCSI subsystem initialized Jul 12 10:10:26.862854 kernel: Loading iSCSI transport class v2.0-870. Jul 12 10:10:26.871860 kernel: iscsi: registered transport (tcp) Jul 12 10:10:26.895129 kernel: iscsi: registered transport (qla4xxx) Jul 12 10:10:26.895176 kernel: QLogic iSCSI HBA Driver Jul 12 10:10:26.906119 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 12 10:10:26.916682 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 12 10:10:26.917937 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 12 10:10:26.939916 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 12 10:10:26.940993 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 12 10:10:26.980854 kernel: raid6: avx2x4 gen() 43013 MB/s Jul 12 10:10:26.997854 kernel: raid6: avx2x2 gen() 47103 MB/s Jul 12 10:10:27.015060 kernel: raid6: avx2x1 gen() 38497 MB/s Jul 12 10:10:27.015108 kernel: raid6: using algorithm avx2x2 gen() 47103 MB/s Jul 12 10:10:27.033058 kernel: raid6: .... xor() 32176 MB/s, rmw enabled Jul 12 10:10:27.033106 kernel: raid6: using avx2x2 recovery algorithm Jul 12 10:10:27.047858 kernel: xor: automatically using best checksumming function avx Jul 12 10:10:27.161865 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 12 10:10:27.165699 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 12 10:10:27.166744 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jul 12 10:10:27.188717 systemd-udevd[493]: Using default interface naming scheme 'v255'. Jul 12 10:10:27.192149 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 10:10:27.193229 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 12 10:10:27.212495 dracut-pre-trigger[500]: rd.md=0: removing MD RAID activation Jul 12 10:10:27.226089 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 12 10:10:27.227186 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 10:10:27.312977 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 10:10:27.315565 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 12 10:10:27.390850 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jul 12 10:10:27.392694 kernel: vmw_pvscsi: using 64bit dma Jul 12 10:10:27.392723 kernel: vmw_pvscsi: max_id: 16 Jul 12 10:10:27.392731 kernel: vmw_pvscsi: setting ring_pages to 8 Jul 12 10:10:27.398314 kernel: vmw_pvscsi: enabling reqCallThreshold Jul 12 10:10:27.398355 kernel: vmw_pvscsi: driver-based request coalescing enabled Jul 12 10:10:27.398367 kernel: vmw_pvscsi: using MSI-X Jul 12 10:10:27.402565 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Jul 12 10:10:27.405867 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jul 12 10:10:27.408849 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jul 12 10:10:27.414874 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jul 12 10:10:27.415001 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jul 12 10:10:27.422865 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jul 12 10:10:27.441096 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jul 12 10:10:27.443876 (udev-worker)[544]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jul 12 10:10:27.447122 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 10:10:27.450412 kernel: cryptd: max_cpu_qlen set to 1000 Jul 12 10:10:27.447200 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 10:10:27.450703 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 10:10:27.453009 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jul 12 10:10:27.453149 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 12 10:10:27.453218 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jul 12 10:10:27.455969 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jul 12 10:10:27.456068 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jul 12 10:10:27.455243 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 10:10:27.457847 kernel: libata version 3.00 loaded. 
Jul 12 10:10:27.478847 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 10:10:27.478883 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 12 10:10:27.483938 kernel: ata_piix 0000:00:07.1: version 2.13 Jul 12 10:10:27.486864 kernel: scsi host1: ata_piix Jul 12 10:10:27.487001 kernel: scsi host2: ata_piix Jul 12 10:10:27.487844 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Jul 12 10:10:27.488851 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 Jul 12 10:10:27.488868 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 Jul 12 10:10:27.494852 kernel: AES CTR mode by8 optimization enabled Jul 12 10:10:27.496216 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 10:10:27.660859 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jul 12 10:10:27.664855 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jul 12 10:10:27.689876 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jul 12 10:10:27.690075 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 12 10:10:27.701847 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 12 10:10:27.708704 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jul 12 10:10:27.714386 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jul 12 10:10:27.718903 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jul 12 10:10:27.719179 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jul 12 10:10:27.724732 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jul 12 10:10:27.725392 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 12 10:10:27.786853 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 10:10:27.945847 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 12 10:10:27.946407 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 10:10:27.946683 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 10:10:27.946947 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 10:10:27.947638 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 12 10:10:27.961139 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 12 10:10:28.828140 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 10:10:28.828183 disk-uuid[646]: The operation has completed successfully. Jul 12 10:10:28.880002 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 12 10:10:28.880083 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 12 10:10:28.890767 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 12 10:10:28.904938 sh[677]: Success Jul 12 10:10:28.928190 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 12 10:10:28.928239 kernel: device-mapper: uevent: version 1.0.3 Jul 12 10:10:28.929549 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 12 10:10:28.937856 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jul 12 10:10:29.000632 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 12 10:10:29.001774 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 12 10:10:29.005669 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 12 10:10:29.045888 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 12 10:10:29.045927 kernel: BTRFS: device fsid 4d28aa26-35d0-4997-8a2e-14597ed98f41 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (689) Jul 12 10:10:29.050857 kernel: BTRFS info (device dm-0): first mount of filesystem 4d28aa26-35d0-4997-8a2e-14597ed98f41 Jul 12 10:10:29.050895 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 12 10:10:29.050903 kernel: BTRFS info (device dm-0): using free-space-tree Jul 12 10:10:29.063010 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 12 10:10:29.063421 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 12 10:10:29.064200 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jul 12 10:10:29.064899 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 12 10:10:29.107852 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (712) Jul 12 10:10:29.121289 kernel: BTRFS info (device sda6): first mount of filesystem 2214f333-d3a1-4dd4-b25f-bf0ce0af42b2 Jul 12 10:10:29.121330 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 12 10:10:29.121339 kernel: BTRFS info (device sda6): using free-space-tree Jul 12 10:10:29.186876 kernel: BTRFS info (device sda6): last unmount of filesystem 2214f333-d3a1-4dd4-b25f-bf0ce0af42b2 Jul 12 10:10:29.188104 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 12 10:10:29.189082 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 12 10:10:29.254303 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 12 10:10:29.256073 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 12 10:10:29.337481 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 10:10:29.339382 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 12 10:10:29.372389 ignition[731]: Ignition 2.21.0 Jul 12 10:10:29.372398 ignition[731]: Stage: fetch-offline Jul 12 10:10:29.372418 ignition[731]: no configs at "/usr/lib/ignition/base.d" Jul 12 10:10:29.372424 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 12 10:10:29.372478 ignition[731]: parsed url from cmdline: "" Jul 12 10:10:29.372480 ignition[731]: no config URL provided Jul 12 10:10:29.372483 ignition[731]: reading system config file "/usr/lib/ignition/user.ign" Jul 12 10:10:29.372487 ignition[731]: no config at "/usr/lib/ignition/user.ign" Jul 12 10:10:29.374013 ignition[731]: config successfully fetched Jul 12 10:10:29.374041 ignition[731]: parsing config with SHA512: c49aad8d12ada013ac3ef4a700a7be6a41b13feffc654c2aaaa5ea57c3a641aa6ed861d2bce8b647132dd9b30061b4fd0d544547efb34499a06ee37c6fd381d5 Jul 12 10:10:29.376110 systemd-networkd[872]: lo: Link UP Jul 12 10:10:29.376114 systemd-networkd[872]: lo: Gained carrier Jul 12 10:10:29.376832 systemd-networkd[872]: Enumeration completed Jul 12 10:10:29.376977 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 10:10:29.377080 systemd-networkd[872]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jul 12 10:10:29.378260 systemd[1]: Reached target network.target - Network. Jul 12 10:10:29.380830 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 12 10:10:29.380965 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 12 10:10:29.381060 systemd-networkd[872]: ens192: Link UP Jul 12 10:10:29.381064 systemd-networkd[872]: ens192: Gained carrier Jul 12 10:10:29.382731 unknown[731]: fetched base config from "system" Jul 12 10:10:29.382859 unknown[731]: fetched user config from "vmware" Jul 12 10:10:29.383083 ignition[731]: fetch-offline: fetch-offline passed Jul 12 10:10:29.383119 ignition[731]: Ignition finished successfully Jul 12 10:10:29.384532 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 12 10:10:29.384892 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 12 10:10:29.385539 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 12 10:10:29.399119 ignition[876]: Ignition 2.21.0 Jul 12 10:10:29.399129 ignition[876]: Stage: kargs Jul 12 10:10:29.399207 ignition[876]: no configs at "/usr/lib/ignition/base.d" Jul 12 10:10:29.399213 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 12 10:10:29.400159 ignition[876]: kargs: kargs passed Jul 12 10:10:29.400194 ignition[876]: Ignition finished successfully Jul 12 10:10:29.401774 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 12 10:10:29.402568 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 12 10:10:29.423112 ignition[884]: Ignition 2.21.0 Jul 12 10:10:29.423121 ignition[884]: Stage: disks Jul 12 10:10:29.423201 ignition[884]: no configs at "/usr/lib/ignition/base.d" Jul 12 10:10:29.423208 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 12 10:10:29.424400 ignition[884]: disks: disks passed Jul 12 10:10:29.424550 ignition[884]: Ignition finished successfully Jul 12 10:10:29.425664 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 12 10:10:29.426076 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
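[Editor's note] The fetch-offline entries above show Ignition looking for a config at /usr/lib/ignition/user.ign and then fetching a base config from "system" and a user config from "vmware" (guestinfo). The config itself is not included in the log; the snippet below is only a minimal sketch of the spec-3.x JSON shape such a user config typically has. The version number, SSH key, file contents, and unit body are placeholders for illustration, not values taken from this boot.

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": [ "ssh-ed25519 AAAA...placeholder" ] }
        ]
      },
      "storage": {
        "files": [
          {
            "path": "/home/core/install.sh",
            "mode": 493,
            "contents": { "source": "data:,echo%20placeholder" }
          }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true, "contents": "[Unit]\nDescription=placeholder\n" }
        ]
      }
    }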
Jul 12 10:10:29.426355 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 12 10:10:29.426644 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 10:10:29.426920 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 10:10:29.427152 systemd[1]: Reached target basic.target - Basic System. Jul 12 10:10:29.427901 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 12 10:10:29.448107 systemd-fsck[893]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jul 12 10:10:29.448969 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 12 10:10:29.450876 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 12 10:10:29.622857 kernel: EXT4-fs (sda9): mounted filesystem e7cb62fe-c14e-444a-ae5a-364f9f21d58c r/w with ordered data mode. Quota mode: none. Jul 12 10:10:29.623263 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 12 10:10:29.623643 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 12 10:10:29.624916 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 12 10:10:29.626903 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 12 10:10:29.627346 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 12 10:10:29.627565 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 12 10:10:29.627581 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 12 10:10:29.636180 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 12 10:10:29.637912 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 12 10:10:29.645856 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (902) Jul 12 10:10:29.649723 kernel: BTRFS info (device sda6): first mount of filesystem 2214f333-d3a1-4dd4-b25f-bf0ce0af42b2 Jul 12 10:10:29.649765 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 12 10:10:29.649773 kernel: BTRFS info (device sda6): using free-space-tree Jul 12 10:10:29.676085 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 12 10:10:29.716663 initrd-setup-root[926]: cut: /sysroot/etc/passwd: No such file or directory Jul 12 10:10:29.719908 initrd-setup-root[933]: cut: /sysroot/etc/group: No such file or directory Jul 12 10:10:29.722215 initrd-setup-root[940]: cut: /sysroot/etc/shadow: No such file or directory Jul 12 10:10:29.724486 initrd-setup-root[947]: cut: /sysroot/etc/gshadow: No such file or directory Jul 12 10:10:29.845702 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 12 10:10:29.846909 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 12 10:10:29.847899 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jul 12 10:10:29.859846 kernel: BTRFS info (device sda6): last unmount of filesystem 2214f333-d3a1-4dd4-b25f-bf0ce0af42b2 Jul 12 10:10:29.877188 ignition[1015]: INFO : Ignition 2.21.0 Jul 12 10:10:29.877188 ignition[1015]: INFO : Stage: mount Jul 12 10:10:29.877761 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 10:10:29.877761 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 12 10:10:29.878479 ignition[1015]: INFO : mount: mount passed Jul 12 10:10:29.878622 ignition[1015]: INFO : Ignition finished successfully Jul 12 10:10:29.879472 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 12 10:10:29.880888 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 12 10:10:29.903386 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 12 10:10:30.045536 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 12 10:10:30.046455 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 12 10:10:30.072855 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1027) Jul 12 10:10:30.075773 kernel: BTRFS info (device sda6): first mount of filesystem 2214f333-d3a1-4dd4-b25f-bf0ce0af42b2 Jul 12 10:10:30.075797 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 12 10:10:30.075807 kernel: BTRFS info (device sda6): using free-space-tree Jul 12 10:10:30.081486 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 12 10:10:30.100295 ignition[1043]: INFO : Ignition 2.21.0 Jul 12 10:10:30.100295 ignition[1043]: INFO : Stage: files Jul 12 10:10:30.100668 ignition[1043]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 10:10:30.100668 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 12 10:10:30.101368 ignition[1043]: DEBUG : files: compiled without relabeling support, skipping Jul 12 10:10:30.111005 ignition[1043]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 12 10:10:30.111005 ignition[1043]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 12 10:10:30.126573 ignition[1043]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 12 10:10:30.126854 ignition[1043]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 12 10:10:30.127193 unknown[1043]: wrote ssh authorized keys file for user: core Jul 12 10:10:30.127463 ignition[1043]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 12 10:10:30.129705 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 12 10:10:30.130014 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 12 10:10:30.163786 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 12 10:10:30.252306 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 12 10:10:30.252613 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 12 10:10:30.252613 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 12 10:10:30.252613 ignition[1043]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 12 10:10:30.252613 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 12 10:10:30.252613 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 10:10:30.252613 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 10:10:30.252613 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 10:10:30.253976 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 10:10:30.257150 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 10:10:30.257366 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 10:10:30.257366 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 12 10:10:30.265118 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 12 10:10:30.265118 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 12 10:10:30.265625 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 12 10:10:30.906809 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 12 10:10:31.029934 systemd-networkd[872]: ens192: Gained IPv6LL Jul 12 10:10:31.103439 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 12 10:10:31.103439 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 12 10:10:31.104466 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 12 10:10:31.104466 ignition[1043]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 12 10:10:31.104921 ignition[1043]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 10:10:31.105561 ignition[1043]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 10:10:31.105561 ignition[1043]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 12 10:10:31.105561 ignition[1043]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 12 10:10:31.105561 ignition[1043]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 12 
10:10:31.105561 ignition[1043]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 12 10:10:31.105561 ignition[1043]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 12 10:10:31.105561 ignition[1043]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 12 10:10:31.170846 ignition[1043]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 12 10:10:31.173096 ignition[1043]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 12 10:10:31.173582 ignition[1043]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 12 10:10:31.173582 ignition[1043]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 12 10:10:31.173582 ignition[1043]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 12 10:10:31.173582 ignition[1043]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 12 10:10:31.173582 ignition[1043]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 12 10:10:31.173582 ignition[1043]: INFO : files: files passed Jul 12 10:10:31.173582 ignition[1043]: INFO : Ignition finished successfully Jul 12 10:10:31.174554 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 12 10:10:31.175382 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 12 10:10:31.176906 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 12 10:10:31.187369 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 12 10:10:31.187441 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 12 10:10:31.190433 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 10:10:31.191160 initrd-setup-root-after-ignition[1075]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 12 10:10:31.191315 initrd-setup-root-after-ignition[1079]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 10:10:31.192558 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 12 10:10:31.192935 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 12 10:10:31.193524 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 12 10:10:31.225278 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 12 10:10:31.225346 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 12 10:10:31.225643 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 12 10:10:31.225753 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 12 10:10:31.226106 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 12 10:10:31.226588 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 12 10:10:31.242461 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
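[Editor's note] The files stage above drops /sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz, writes prepare-helm.service into /sysroot/etc/systemd/system, and presets it to enabled. The unit body is not quoted in the log; a unit of this name is typically a small oneshot that unpacks the tarball on first boot, roughly like the hypothetical sketch below (the ExecStart commands and the /opt/bin destination are assumptions):

    [Unit]
    Description=Unpack helm to /opt/bin
    ConditionPathExists=/opt/helm-v3.17.3-linux-amd64.tar.gz

    [Service]
    Type=oneshot
    RemainAfterExit=true
    ExecStart=/usr/bin/mkdir -p /opt/bin
    ExecStart=/usr/bin/tar -xf /opt/helm-v3.17.3-linux-amd64.tar.gz -C /opt/bin --strip-components=1 linux-amd64/helm

    [Install]
    WantedBy=multi-user.target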
Jul 12 10:10:31.243259 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 12 10:10:31.256976 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 12 10:10:31.257160 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 10:10:31.257381 systemd[1]: Stopped target timers.target - Timer Units. Jul 12 10:10:31.257577 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 12 10:10:31.257649 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 12 10:10:31.258034 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 12 10:10:31.258184 systemd[1]: Stopped target basic.target - Basic System. Jul 12 10:10:31.258365 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 12 10:10:31.258566 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 12 10:10:31.258772 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 12 10:10:31.259004 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 12 10:10:31.259206 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 12 10:10:31.259417 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 10:10:31.259632 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 12 10:10:31.259879 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 12 10:10:31.260028 systemd[1]: Stopped target swap.target - Swaps. Jul 12 10:10:31.260200 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 12 10:10:31.260262 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 12 10:10:31.260519 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 12 10:10:31.260761 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 10:10:31.260979 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 12 10:10:31.261026 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 10:10:31.261207 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 12 10:10:31.261268 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 12 10:10:31.261521 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 12 10:10:31.261592 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 12 10:10:31.261831 systemd[1]: Stopped target paths.target - Path Units. Jul 12 10:10:31.261971 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 12 10:10:31.265855 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 10:10:31.266030 systemd[1]: Stopped target slices.target - Slice Units. Jul 12 10:10:31.266252 systemd[1]: Stopped target sockets.target - Socket Units. Jul 12 10:10:31.266424 systemd[1]: iscsid.socket: Deactivated successfully. Jul 12 10:10:31.266474 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 10:10:31.266617 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 12 10:10:31.266660 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 10:10:31.266828 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jul 12 10:10:31.266904 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 12 10:10:31.267147 systemd[1]: ignition-files.service: Deactivated successfully. Jul 12 10:10:31.267207 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 12 10:10:31.267800 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 12 10:10:31.269337 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 12 10:10:31.269450 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 12 10:10:31.269516 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 10:10:31.269683 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 12 10:10:31.269749 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 12 10:10:31.273734 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 12 10:10:31.277913 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 12 10:10:31.285653 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 12 10:10:31.288885 ignition[1099]: INFO : Ignition 2.21.0 Jul 12 10:10:31.290329 ignition[1099]: INFO : Stage: umount Jul 12 10:10:31.290329 ignition[1099]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 10:10:31.290329 ignition[1099]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 12 10:10:31.290329 ignition[1099]: INFO : umount: umount passed Jul 12 10:10:31.290329 ignition[1099]: INFO : Ignition finished successfully Jul 12 10:10:31.291724 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 12 10:10:31.291939 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 12 10:10:31.292240 systemd[1]: Stopped target network.target - Network. Jul 12 10:10:31.292440 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 12 10:10:31.292559 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 12 10:10:31.292798 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 12 10:10:31.292922 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 12 10:10:31.293169 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 12 10:10:31.293285 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 12 10:10:31.293518 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 12 10:10:31.293635 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 12 10:10:31.293949 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 12 10:10:31.294210 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 12 10:10:31.299812 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 12 10:10:31.299974 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 12 10:10:31.301434 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 12 10:10:31.301716 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 12 10:10:31.301917 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 12 10:10:31.302632 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 12 10:10:31.303250 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 12 10:10:31.303493 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Jul 12 10:10:31.303632 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 12 10:10:31.304309 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 12 10:10:31.304518 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 12 10:10:31.304641 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 10:10:31.304968 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jul 12 10:10:31.305092 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 12 10:10:31.305509 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 10:10:31.305647 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 12 10:10:31.305967 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 12 10:10:31.306112 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 12 10:10:31.306338 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 12 10:10:31.306459 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 10:10:31.306758 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 10:10:31.307620 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 12 10:10:31.307652 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 12 10:10:31.320245 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 12 10:10:31.320643 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 10:10:31.321033 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 12 10:10:31.321056 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 12 10:10:31.321157 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 12 10:10:31.321174 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 10:10:31.321262 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 12 10:10:31.321284 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 12 10:10:31.321417 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 12 10:10:31.321439 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 12 10:10:31.321559 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 10:10:31.321582 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 10:10:31.322907 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 12 10:10:31.323003 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 12 10:10:31.323028 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 12 10:10:31.323200 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 12 10:10:31.323224 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 10:10:31.323379 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 12 10:10:31.323400 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 10:10:31.323557 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jul 12 10:10:31.323579 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 10:10:31.323702 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 10:10:31.323723 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 10:10:31.325575 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 12 10:10:31.325604 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 12 10:10:31.325624 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 12 10:10:31.325644 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 12 10:10:31.325776 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 12 10:10:31.326591 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 12 10:10:31.334929 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 12 10:10:31.335000 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 12 10:10:31.382031 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 12 10:10:31.382130 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 12 10:10:31.382460 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 12 10:10:31.382597 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 12 10:10:31.382629 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 12 10:10:31.383382 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 12 10:10:31.402056 systemd[1]: Switching root. Jul 12 10:10:31.448253 systemd-journald[244]: Journal stopped Jul 12 10:10:33.030950 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). Jul 12 10:10:33.030978 kernel: SELinux: policy capability network_peer_controls=1 Jul 12 10:10:33.030987 kernel: SELinux: policy capability open_perms=1 Jul 12 10:10:33.030992 kernel: SELinux: policy capability extended_socket_class=1 Jul 12 10:10:33.030997 kernel: SELinux: policy capability always_check_network=0 Jul 12 10:10:33.031004 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 12 10:10:33.031011 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 12 10:10:33.031016 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 12 10:10:33.031023 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 12 10:10:33.031028 kernel: SELinux: policy capability userspace_initial_context=0 Jul 12 10:10:33.031034 kernel: audit: type=1403 audit(1752315032.295:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 12 10:10:33.031041 systemd[1]: Successfully loaded SELinux policy in 105.974ms. Jul 12 10:10:33.031049 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.737ms. Jul 12 10:10:33.031056 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 12 10:10:33.031063 systemd[1]: Detected virtualization vmware. Jul 12 10:10:33.031070 systemd[1]: Detected architecture x86-64. Jul 12 10:10:33.031077 systemd[1]: Detected first boot. 
Jul 12 10:10:33.031084 systemd[1]: Initializing machine ID from random generator. Jul 12 10:10:33.031091 zram_generator::config[1143]: No configuration found. Jul 12 10:10:33.031181 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jul 12 10:10:33.031192 kernel: Guest personality initialized and is active Jul 12 10:10:33.031199 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 12 10:10:33.031205 kernel: Initialized host personality Jul 12 10:10:33.031214 kernel: NET: Registered PF_VSOCK protocol family Jul 12 10:10:33.031220 systemd[1]: Populated /etc with preset unit settings. Jul 12 10:10:33.031228 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 12 10:10:33.031235 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Jul 12 10:10:33.031242 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 12 10:10:33.031248 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 12 10:10:33.031254 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 12 10:10:33.031262 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 12 10:10:33.031269 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 12 10:10:33.031276 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 12 10:10:33.031283 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 12 10:10:33.031290 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 12 10:10:33.031297 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 12 10:10:33.031304 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 12 10:10:33.031311 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 12 10:10:33.031318 systemd[1]: Created slice user.slice - User and Session Slice. Jul 12 10:10:33.031325 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 10:10:33.031333 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 10:10:33.031340 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 12 10:10:33.031347 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 12 10:10:33.031354 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 12 10:10:33.031361 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 10:10:33.031369 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 12 10:10:33.031376 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 10:10:33.031383 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 10:10:33.031390 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 12 10:10:33.031396 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 12 10:10:33.031403 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
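[Editor's note] The "/etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences" warnings above come from systemd's unit-file parser, not from Ignition: the shell pipeline embedded on that line uses grep -Po patterns containing \K and \d, which are not recognized C-style escapes in a unit-file command line, so systemd warns and passes them through literally (harmless here). A tiny hypothetical unit, not the real one, that reproduces the same class of warning:

    # /etc/systemd/system/escape-demo.service (hypothetical, for illustration only)
    [Service]
    Type=oneshot
    # \K and \d are not valid C-style escapes, so systemd logs
    # "Ignoring unknown escape sequences" and keeps them as-is
    ExecStart=/usr/bin/echo "regex with \K and \d stays literal"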
Jul 12 10:10:33.031410 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 12 10:10:33.031417 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 10:10:33.031425 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 10:10:33.031432 systemd[1]: Reached target slices.target - Slice Units. Jul 12 10:10:33.031439 systemd[1]: Reached target swap.target - Swaps. Jul 12 10:10:33.031446 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 12 10:10:33.031453 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 12 10:10:33.031461 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 12 10:10:33.031468 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 12 10:10:33.031475 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 10:10:33.031482 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 10:10:33.031488 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 12 10:10:33.031495 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 12 10:10:33.031502 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 12 10:10:33.031509 systemd[1]: Mounting media.mount - External Media Directory... Jul 12 10:10:33.031517 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 12 10:10:33.031524 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 12 10:10:33.031531 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 12 10:10:33.031538 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 12 10:10:33.031545 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 12 10:10:33.031552 systemd[1]: Reached target machines.target - Containers. Jul 12 10:10:33.031560 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 12 10:10:33.031567 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Jul 12 10:10:33.031574 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 10:10:33.031581 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 12 10:10:33.031588 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 10:10:33.031595 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 10:10:33.031602 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 10:10:33.031608 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 12 10:10:33.031615 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 10:10:33.031622 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 12 10:10:33.031630 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 12 10:10:33.031637 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 12 10:10:33.031644 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Jul 12 10:10:33.031651 systemd[1]: Stopped systemd-fsck-usr.service. Jul 12 10:10:33.031658 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 12 10:10:33.031665 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 10:10:33.031672 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 10:10:33.031679 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 12 10:10:33.031687 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 12 10:10:33.031694 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 12 10:10:33.031700 kernel: loop: module loaded Jul 12 10:10:33.031708 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 10:10:33.031715 systemd[1]: verity-setup.service: Deactivated successfully. Jul 12 10:10:33.031721 systemd[1]: Stopped verity-setup.service. Jul 12 10:10:33.031728 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 12 10:10:33.031735 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 12 10:10:33.031742 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 12 10:10:33.031750 systemd[1]: Mounted media.mount - External Media Directory. Jul 12 10:10:33.031757 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 12 10:10:33.031763 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 12 10:10:33.031770 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 12 10:10:33.031782 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 10:10:33.031789 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 12 10:10:33.031796 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 12 10:10:33.031813 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 10:10:33.031821 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 10:10:33.031828 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 10:10:33.031842 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 10:10:33.031849 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 10:10:33.031856 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 10:10:33.031863 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 12 10:10:33.031870 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 10:10:33.031878 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 12 10:10:33.031884 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 10:10:33.031893 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 12 10:10:33.031900 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jul 12 10:10:33.031910 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 12 10:10:33.031918 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 12 10:10:33.031925 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 12 10:10:33.031932 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 10:10:33.031939 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 12 10:10:33.031948 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 12 10:10:33.031955 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 10:10:33.031962 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 12 10:10:33.031970 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 10:10:33.031992 systemd-journald[1233]: Collecting audit messages is disabled. Jul 12 10:10:33.032011 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 12 10:10:33.032018 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 10:10:33.032026 systemd-journald[1233]: Journal started Jul 12 10:10:33.032041 systemd-journald[1233]: Runtime Journal (/run/log/journal/3b296ccf3caa460fb6f42a0c027e844d) is 4.8M, max 38.8M, 34M free. Jul 12 10:10:32.815306 systemd[1]: Queued start job for default target multi-user.target. Jul 12 10:10:32.824749 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 12 10:10:32.824981 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 12 10:10:33.039780 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 12 10:10:33.040863 jq[1213]: true Jul 12 10:10:33.041441 jq[1248]: true Jul 12 10:10:33.044590 kernel: fuse: init (API version 7.41) Jul 12 10:10:33.044637 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 10:10:33.043872 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 12 10:10:33.049853 kernel: ACPI: bus type drm_connector registered Jul 12 10:10:33.048356 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 12 10:10:33.050454 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 12 10:10:33.050757 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Jul 12 10:10:33.050768 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Jul 12 10:10:33.050818 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 10:10:33.052152 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 10:10:33.052748 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 12 10:10:33.053402 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 10:10:33.058017 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 12 10:10:33.070845 kernel: loop0: detected capacity change from 0 to 114000 Jul 12 10:10:33.073025 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 12 10:10:33.075924 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jul 12 10:10:33.077910 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 12 10:10:33.080950 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 12 10:10:33.085695 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 10:10:33.101888 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 12 10:10:33.113982 systemd-journald[1233]: Time spent on flushing to /var/log/journal/3b296ccf3caa460fb6f42a0c027e844d is 48.779ms for 1768 entries. Jul 12 10:10:33.113982 systemd-journald[1233]: System Journal (/var/log/journal/3b296ccf3caa460fb6f42a0c027e844d) is 8M, max 584.8M, 576.8M free. Jul 12 10:10:33.173083 systemd-journald[1233]: Received client request to flush runtime journal. Jul 12 10:10:33.173110 kernel: loop1: detected capacity change from 0 to 229808 Jul 12 10:10:33.154146 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 10:10:33.130944 ignition[1268]: Ignition 2.21.0 Jul 12 10:10:33.154549 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 12 10:10:33.131096 ignition[1268]: deleting config from guestinfo properties Jul 12 10:10:33.156919 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Jul 12 10:10:33.155512 ignition[1268]: Successfully deleted config Jul 12 10:10:33.174152 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 12 10:10:33.175878 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 12 10:10:33.179131 kernel: loop2: detected capacity change from 0 to 146488 Jul 12 10:10:33.179116 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 10:10:33.205317 systemd-tmpfiles[1312]: ACLs are not supported, ignoring. Jul 12 10:10:33.205331 systemd-tmpfiles[1312]: ACLs are not supported, ignoring. Jul 12 10:10:33.208236 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 10:10:33.221859 kernel: loop3: detected capacity change from 0 to 2960 Jul 12 10:10:33.250934 kernel: loop4: detected capacity change from 0 to 114000 Jul 12 10:10:33.278327 kernel: loop5: detected capacity change from 0 to 229808 Jul 12 10:10:33.307856 kernel: loop6: detected capacity change from 0 to 146488 Jul 12 10:10:33.323856 kernel: loop7: detected capacity change from 0 to 2960 Jul 12 10:10:33.338426 (sd-merge)[1318]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Jul 12 10:10:33.339448 (sd-merge)[1318]: Merged extensions into '/usr'. Jul 12 10:10:33.344103 systemd[1]: Reload requested from client PID 1267 ('systemd-sysext') (unit systemd-sysext.service)... Jul 12 10:10:33.344114 systemd[1]: Reloading... Jul 12 10:10:33.421663 zram_generator::config[1347]: No configuration found. Jul 12 10:10:33.525415 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 10:10:33.556665 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 12 10:10:33.623660 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 12 10:10:33.623978 systemd[1]: Reloading finished in 279 ms. 
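[Editor's note] The (sd-merge) entries above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-vmware' extension images onto /usr, followed by a service reload. On the booted system the result can be inspected with standard tools; the commands below are illustrative and not part of this log:

    # show which extension images are currently merged into /usr
    systemd-sysext status
    # the kubernetes image is picked up via the symlink written during the Ignition files stage
    ls -l /etc/extensions/kubernetes.raw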
Jul 12 10:10:33.638372 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 12 10:10:33.650251 systemd[1]: Starting ensure-sysext.service... Jul 12 10:10:33.653948 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 10:10:33.667042 systemd[1]: Reload requested from client PID 1399 ('systemctl') (unit ensure-sysext.service)... Jul 12 10:10:33.667055 systemd[1]: Reloading... Jul 12 10:10:33.689386 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 12 10:10:33.690287 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 12 10:10:33.695074 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 12 10:10:33.695371 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 12 10:10:33.696372 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 12 10:10:33.696611 systemd-tmpfiles[1400]: ACLs are not supported, ignoring. Jul 12 10:10:33.696701 systemd-tmpfiles[1400]: ACLs are not supported, ignoring. Jul 12 10:10:33.705853 zram_generator::config[1427]: No configuration found. Jul 12 10:10:33.710413 systemd-tmpfiles[1400]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 10:10:33.710420 systemd-tmpfiles[1400]: Skipping /boot Jul 12 10:10:33.715340 systemd-tmpfiles[1400]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 10:10:33.715925 systemd-tmpfiles[1400]: Skipping /boot Jul 12 10:10:33.746294 ldconfig[1255]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 12 10:10:33.790510 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 10:10:33.798764 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 12 10:10:33.846104 systemd[1]: Reloading finished in 178 ms. Jul 12 10:10:33.859881 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 12 10:10:33.860232 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 12 10:10:33.863413 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 10:10:33.868647 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 12 10:10:33.871292 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 12 10:10:33.873905 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 12 10:10:33.875475 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 12 10:10:33.877004 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 10:10:33.880918 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 10:10:33.882812 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
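[Editor's note] The systemd-tmpfiles "Duplicate line for path ..., ignoring" messages above mean the same path is declared by more than one tmpfiles.d fragment (common once the sysext images are merged, since base image and extension can both ship a rule for it); only the first line for a path is applied. For reference, a tmpfiles.d line has the form below; the mode and ownership values are placeholders, not Flatcar's actual nfs-utils settings:

    # Type  Path             Mode  User  Group  Age  Argument
    d       /var/lib/nfs/sm  0700  root  root   -    -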
Jul 12 10:10:33.884276 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 12 10:10:33.895155 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 12 10:10:33.897534 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 12 10:10:33.899043 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 10:10:33.902238 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 10:10:33.904199 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 10:10:33.904531 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 10:10:33.904604 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 12 10:10:33.904671 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 12 10:10:33.908460 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 12 10:10:33.908592 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 10:10:33.908687 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 12 10:10:33.908783 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 12 10:10:33.912606 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 12 10:10:33.913798 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 12 10:10:33.916404 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 10:10:33.916864 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 10:10:33.916933 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 12 10:10:33.917032 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 12 10:10:33.920276 systemd[1]: Finished ensure-sysext.service. Jul 12 10:10:33.925807 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 10:10:33.926170 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 10:10:33.928497 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 10:10:33.928762 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 10:10:33.929639 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 12 10:10:33.933097 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 12 10:10:33.935399 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 12 10:10:33.940989 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 12 10:10:33.941562 systemd-udevd[1492]: Using default interface naming scheme 'v255'. Jul 12 10:10:33.943661 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 10:10:33.945413 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 10:10:33.945997 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 10:10:33.949519 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 10:10:33.949649 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 10:10:33.955312 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 12 10:10:33.957234 augenrules[1529]: No rules Jul 12 10:10:33.957955 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 10:10:33.958819 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 12 10:10:33.972072 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 10:10:33.975927 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 10:10:33.976143 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 12 10:10:33.981421 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 12 10:10:33.981615 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 10:10:34.044191 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 12 10:10:34.140390 systemd-networkd[1543]: lo: Link UP Jul 12 10:10:34.141878 systemd-networkd[1543]: lo: Gained carrier Jul 12 10:10:34.142748 systemd-networkd[1543]: Enumeration completed Jul 12 10:10:34.142827 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 10:10:34.143143 systemd-networkd[1543]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Jul 12 10:10:34.145851 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 12 10:10:34.145976 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 12 10:10:34.147090 systemd-networkd[1543]: ens192: Link UP Jul 12 10:10:34.147219 systemd-networkd[1543]: ens192: Gained carrier Jul 12 10:10:34.149919 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 12 10:10:34.150917 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 12 10:10:34.156610 systemd-resolved[1491]: Positive Trust Anchors: Jul 12 10:10:34.156619 systemd-resolved[1491]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 10:10:34.156644 systemd-resolved[1491]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 10:10:34.162183 systemd-resolved[1491]: Defaulting to hostname 'linux'. Jul 12 10:10:34.166204 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 10:10:34.166976 systemd[1]: Reached target network.target - Network. Jul 12 10:10:34.167084 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 10:10:34.169158 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 12 10:10:34.169320 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 10:10:34.169606 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 12 10:10:34.169752 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 12 10:10:34.170218 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 12 10:10:34.170343 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 12 10:10:34.170474 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 12 10:10:34.170493 systemd[1]: Reached target paths.target - Path Units. Jul 12 10:10:34.170601 systemd[1]: Reached target time-set.target - System Time Set. Jul 12 10:10:34.170800 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 12 10:10:34.171505 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 12 10:10:34.171634 systemd[1]: Reached target timers.target - Timer Units. Jul 12 10:10:34.172152 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 12 10:10:34.173219 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 12 10:10:34.175801 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 12 10:10:34.176945 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 12 10:10:34.177075 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 12 10:10:34.187594 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 12 10:10:34.188543 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 12 10:10:34.189866 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 12 10:10:34.190187 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 12 10:10:34.193859 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 12 10:10:34.195628 systemd[1]: Reached target sockets.target - Socket Units. 
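ens192 is configured from /etc/systemd/network/00-vmware.network (see the networkd lines above); the file's contents are not shown in this log, but a DHCP-based sketch of what such a .network unit typically contains might look like this (values are assumptions, not taken from this host):

    # hypothetical contents for illustration only
    cat <<'EOF' | sudo tee /etc/systemd/network/00-vmware.network >/dev/null
    [Match]
    Name=ens192

    [Network]
    DHCP=yes
    EOF
    sudo networkctl reload   # have systemd-networkd re-read its .network files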
Jul 12 10:10:34.196844 kernel: mousedev: PS/2 mouse device common for all mice Jul 12 10:10:34.196817 systemd[1]: Reached target basic.target - Basic System. Jul 12 10:10:34.197135 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 12 10:10:34.197167 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 12 10:10:34.198936 systemd[1]: Starting containerd.service - containerd container runtime... Jul 12 10:10:34.201553 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 12 10:10:34.203326 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 12 10:10:34.204796 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 12 10:10:34.206334 kernel: ACPI: button: Power Button [PWRF] Jul 12 10:10:34.207140 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 12 10:10:34.207266 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 12 10:10:34.208982 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 12 10:10:34.213036 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 12 10:10:34.213889 jq[1590]: false Jul 12 10:10:34.215300 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 12 10:10:34.218781 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 12 10:10:34.220957 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 12 10:10:34.223428 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Refreshing passwd entry cache Jul 12 10:10:34.224292 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 12 10:10:34.224886 oslogin_cache_refresh[1592]: Refreshing passwd entry cache Jul 12 10:10:34.224952 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 10:10:34.228005 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 12 10:10:34.230481 systemd[1]: Starting update-engine.service - Update Engine... Jul 12 10:10:34.232079 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Failure getting users, quitting Jul 12 10:10:34.232079 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 12 10:10:34.232079 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Refreshing group entry cache Jul 12 10:10:34.231722 oslogin_cache_refresh[1592]: Failure getting users, quitting Jul 12 10:10:34.231733 oslogin_cache_refresh[1592]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 12 10:10:34.231764 oslogin_cache_refresh[1592]: Refreshing group entry cache Jul 12 10:10:34.232293 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 12 10:10:34.233843 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... 
Jul 12 10:10:34.234865 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Failure getting groups, quitting Jul 12 10:10:34.234865 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 12 10:10:34.234778 oslogin_cache_refresh[1592]: Failure getting groups, quitting Jul 12 10:10:34.234784 oslogin_cache_refresh[1592]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 12 10:10:34.242933 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 12 10:10:34.243333 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 10:10:34.243462 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 12 10:10:34.243617 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 12 10:10:34.243733 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 12 10:10:34.245865 extend-filesystems[1591]: Found /dev/sda6 Jul 12 10:10:34.248128 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 12 10:10:34.251945 jq[1602]: true Jul 12 10:10:34.248290 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 12 10:10:34.259367 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jul 12 10:10:34.262273 extend-filesystems[1591]: Found /dev/sda9 Jul 12 10:10:34.263693 extend-filesystems[1591]: Checking size of /dev/sda9 Jul 12 10:10:34.265498 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 12 10:10:34.269563 (ntainerd)[1615]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 12 10:10:34.274018 jq[1614]: true Jul 12 10:10:34.281014 update_engine[1599]: I20250712 10:10:34.276021 1599 main.cc:92] Flatcar Update Engine starting Jul 12 10:10:34.283342 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 10:10:34.283564 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 12 10:10:34.286975 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Jul 12 10:10:34.292570 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Jul 12 10:10:34.315172 extend-filesystems[1591]: Old size kept for /dev/sda9 Jul 12 10:10:34.315559 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 12 10:10:34.316335 tar[1606]: linux-amd64/LICENSE Jul 12 10:10:34.316335 tar[1606]: linux-amd64/helm Jul 12 10:10:34.321110 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 12 10:10:34.326635 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 12 10:10:34.332152 dbus-daemon[1588]: [system] SELinux support is enabled Jul 12 10:10:34.332270 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 12 10:10:34.335583 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 10:10:34.335604 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
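The Flatcar update engine starting above, together with locksmithd just below, handles automatic updates and coordinated reboots; the strategy they report is normally taken from update.conf. A sketch of pinning it explicitly, assuming the conventional Flatcar path and variable name:

    # assumed location on Flatcar; REBOOT_STRATEGY can be reboot, etcd-lock, or off
    echo 'REBOOT_STRATEGY=reboot' | sudo tee -a /etc/flatcar/update.conf >/dev/null
    sudo systemctl restart locksmithd.service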
Jul 12 10:10:34.336272 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 10:10:34.336287 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 12 10:10:34.342659 systemd[1]: Started update-engine.service - Update Engine. Jul 12 10:10:34.343115 update_engine[1599]: I20250712 10:10:34.342861 1599 update_check_scheduler.cc:74] Next update check in 6m8s Jul 12 10:10:34.358400 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 12 10:10:34.360400 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Jul 12 10:10:34.371076 systemd-logind[1597]: New seat seat0. Jul 12 10:10:34.372404 systemd[1]: Started systemd-logind.service - User Login Management. Jul 12 10:10:34.388973 unknown[1632]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Jul 12 10:12:00.833605 systemd-resolved[1491]: Clock change detected. Flushing caches. Jul 12 10:12:00.833726 systemd-timesyncd[1517]: Contacted time server 104.233.211.205:123 (0.flatcar.pool.ntp.org). Jul 12 10:12:00.833776 systemd-timesyncd[1517]: Initial clock synchronization to Sat 2025-07-12 10:12:00.833340 UTC. Jul 12 10:12:00.834162 bash[1658]: Updated "/home/core/.ssh/authorized_keys" Jul 12 10:12:00.834541 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 12 10:12:00.835903 unknown[1632]: Core dump limit set to -1 Jul 12 10:12:00.836651 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 12 10:12:00.989024 locksmithd[1652]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 10:12:01.019068 containerd[1615]: time="2025-07-12T10:12:01Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 12 10:12:01.020737 containerd[1615]: time="2025-07-12T10:12:01.020597557Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 12 10:12:01.040177 containerd[1615]: time="2025-07-12T10:12:01.039562832Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="5.622µs" Jul 12 10:12:01.042440 containerd[1615]: time="2025-07-12T10:12:01.041443099Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 12 10:12:01.042440 containerd[1615]: time="2025-07-12T10:12:01.041464299Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 12 10:12:01.042440 containerd[1615]: time="2025-07-12T10:12:01.041552608Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 12 10:12:01.042440 containerd[1615]: time="2025-07-12T10:12:01.041561839Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 12 10:12:01.042440 containerd[1615]: time="2025-07-12T10:12:01.041578508Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 12 10:12:01.042440 containerd[1615]: time="2025-07-12T10:12:01.041612733Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" 
id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 12 10:12:01.042440 containerd[1615]: time="2025-07-12T10:12:01.041619812Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 12 10:12:01.042440 containerd[1615]: time="2025-07-12T10:12:01.041736598Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 12 10:12:01.042440 containerd[1615]: time="2025-07-12T10:12:01.041744923Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 12 10:12:01.042440 containerd[1615]: time="2025-07-12T10:12:01.041750592Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 12 10:12:01.042440 containerd[1615]: time="2025-07-12T10:12:01.041755051Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 12 10:12:01.042440 containerd[1615]: time="2025-07-12T10:12:01.041798368Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 12 10:12:01.042609 containerd[1615]: time="2025-07-12T10:12:01.041904213Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 12 10:12:01.042609 containerd[1615]: time="2025-07-12T10:12:01.041919001Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 12 10:12:01.042609 containerd[1615]: time="2025-07-12T10:12:01.041925256Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 12 10:12:01.042609 containerd[1615]: time="2025-07-12T10:12:01.041943121Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 12 10:12:01.043926 containerd[1615]: time="2025-07-12T10:12:01.043914300Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 12 10:12:01.044289 containerd[1615]: time="2025-07-12T10:12:01.044279416Z" level=info msg="metadata content store policy set" policy=shared Jul 12 10:12:01.045972 containerd[1615]: time="2025-07-12T10:12:01.045961125Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 12 10:12:01.046278 containerd[1615]: time="2025-07-12T10:12:01.046268376Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 12 10:12:01.046418 containerd[1615]: time="2025-07-12T10:12:01.046318497Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 12 10:12:01.046418 containerd[1615]: time="2025-07-12T10:12:01.046329556Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 12 10:12:01.046418 containerd[1615]: time="2025-07-12T10:12:01.046336454Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 12 10:12:01.046418 containerd[1615]: time="2025-07-12T10:12:01.046347372Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 12 10:12:01.046418 containerd[1615]: time="2025-07-12T10:12:01.046354836Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 12 10:12:01.046418 containerd[1615]: time="2025-07-12T10:12:01.046361065Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 12 10:12:01.046418 containerd[1615]: time="2025-07-12T10:12:01.046366628Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 12 10:12:01.046418 containerd[1615]: time="2025-07-12T10:12:01.046371999Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 12 10:12:01.046418 containerd[1615]: time="2025-07-12T10:12:01.046377308Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 12 10:12:01.046418 containerd[1615]: time="2025-07-12T10:12:01.046384170Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 12 10:12:01.046996 containerd[1615]: time="2025-07-12T10:12:01.046673871Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 12 10:12:01.046996 containerd[1615]: time="2025-07-12T10:12:01.046688502Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 12 10:12:01.046996 containerd[1615]: time="2025-07-12T10:12:01.046697254Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 12 10:12:01.046996 containerd[1615]: time="2025-07-12T10:12:01.046703082Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 12 10:12:01.046996 containerd[1615]: time="2025-07-12T10:12:01.046727054Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 12 10:12:01.046996 containerd[1615]: time="2025-07-12T10:12:01.046733116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 12 10:12:01.046996 containerd[1615]: time="2025-07-12T10:12:01.046738908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 12 10:12:01.046996 containerd[1615]: time="2025-07-12T10:12:01.046748823Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 12 10:12:01.046996 containerd[1615]: time="2025-07-12T10:12:01.046770049Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 12 10:12:01.046996 containerd[1615]: time="2025-07-12T10:12:01.046776257Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 12 10:12:01.046996 containerd[1615]: time="2025-07-12T10:12:01.046782408Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 12 10:12:01.046996 containerd[1615]: time="2025-07-12T10:12:01.046816175Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 12 10:12:01.046996 containerd[1615]: time="2025-07-12T10:12:01.046823746Z" level=info msg="Start snapshots syncer" Jul 12 10:12:01.048361 containerd[1615]: time="2025-07-12T10:12:01.047709736Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime 
type=io.containerd.cri.v1 Jul 12 10:12:01.048361 containerd[1615]: time="2025-07-12T10:12:01.047927452Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 12 10:12:01.048491 containerd[1615]: time="2025-07-12T10:12:01.047964927Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 12 10:12:01.048842 containerd[1615]: time="2025-07-12T10:12:01.048788481Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 12 10:12:01.049695 containerd[1615]: time="2025-07-12T10:12:01.049075638Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 12 10:12:01.049813 containerd[1615]: time="2025-07-12T10:12:01.049332876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 12 10:12:01.049930 containerd[1615]: time="2025-07-12T10:12:01.049920285Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 12 10:12:01.050024 containerd[1615]: time="2025-07-12T10:12:01.050014097Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 12 10:12:01.050593 containerd[1615]: time="2025-07-12T10:12:01.050298170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 12 10:12:01.050593 containerd[1615]: time="2025-07-12T10:12:01.050311882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 12 10:12:01.050593 containerd[1615]: time="2025-07-12T10:12:01.050321705Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 12 10:12:01.050593 containerd[1615]: time="2025-07-12T10:12:01.050339711Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 12 10:12:01.050593 containerd[1615]: time="2025-07-12T10:12:01.050349275Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 12 10:12:01.050593 containerd[1615]: time="2025-07-12T10:12:01.050358014Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 12 10:12:01.050593 containerd[1615]: time="2025-07-12T10:12:01.050379809Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 12 10:12:01.050593 containerd[1615]: time="2025-07-12T10:12:01.050391551Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 12 10:12:01.055054 containerd[1615]: time="2025-07-12T10:12:01.053952965Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 12 10:12:01.055054 containerd[1615]: time="2025-07-12T10:12:01.053995216Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 12 10:12:01.055054 containerd[1615]: time="2025-07-12T10:12:01.054003604Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 12 10:12:01.055054 containerd[1615]: time="2025-07-12T10:12:01.054027833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 12 10:12:01.055054 containerd[1615]: time="2025-07-12T10:12:01.054042183Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 12 10:12:01.055054 containerd[1615]: time="2025-07-12T10:12:01.054071766Z" level=info msg="runtime interface created" Jul 12 10:12:01.055054 containerd[1615]: time="2025-07-12T10:12:01.054079607Z" level=info msg="created NRI interface" Jul 12 10:12:01.055054 containerd[1615]: time="2025-07-12T10:12:01.054085067Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 12 10:12:01.055054 containerd[1615]: time="2025-07-12T10:12:01.054103229Z" level=info msg="Connect containerd service" Jul 12 10:12:01.055054 containerd[1615]: time="2025-07-12T10:12:01.054129190Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 12 10:12:01.057536 containerd[1615]: time="2025-07-12T10:12:01.057094393Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 10:12:01.078183 sshd_keygen[1635]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 10:12:01.078408 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Jul 12 10:12:01.097130 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 12 10:12:01.098270 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 12 10:12:01.112560 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 10:12:01.112706 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 12 10:12:01.113934 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jul 12 10:12:01.131828 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 12 10:12:01.133349 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 12 10:12:01.145776 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 12 10:12:01.146544 systemd[1]: Reached target getty.target - Login Prompts. Jul 12 10:12:01.228696 (udev-worker)[1548]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jul 12 10:12:01.236668 tar[1606]: linux-amd64/README.md Jul 12 10:12:01.253201 containerd[1615]: time="2025-07-12T10:12:01.252910269Z" level=info msg="Start subscribing containerd event" Jul 12 10:12:01.253201 containerd[1615]: time="2025-07-12T10:12:01.252942244Z" level=info msg="Start recovering state" Jul 12 10:12:01.253201 containerd[1615]: time="2025-07-12T10:12:01.253010891Z" level=info msg="Start event monitor" Jul 12 10:12:01.253201 containerd[1615]: time="2025-07-12T10:12:01.253046692Z" level=info msg="Start cni network conf syncer for default" Jul 12 10:12:01.253201 containerd[1615]: time="2025-07-12T10:12:01.253059702Z" level=info msg="Start streaming server" Jul 12 10:12:01.253201 containerd[1615]: time="2025-07-12T10:12:01.253047824Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 10:12:01.253201 containerd[1615]: time="2025-07-12T10:12:01.253131438Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 12 10:12:01.253201 containerd[1615]: time="2025-07-12T10:12:01.253078137Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 12 10:12:01.253201 containerd[1615]: time="2025-07-12T10:12:01.253154967Z" level=info msg="runtime interface starting up..." Jul 12 10:12:01.253201 containerd[1615]: time="2025-07-12T10:12:01.253159966Z" level=info msg="starting plugins..." Jul 12 10:12:01.253201 containerd[1615]: time="2025-07-12T10:12:01.253171437Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 12 10:12:01.253617 containerd[1615]: time="2025-07-12T10:12:01.253494118Z" level=info msg="containerd successfully booted in 0.235030s" Jul 12 10:12:01.253824 systemd[1]: Started containerd.service - containerd container runtime. Jul 12 10:12:01.260657 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 12 10:12:01.275065 systemd-logind[1597]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 12 10:12:01.280814 systemd-logind[1597]: Watching system buttons on /dev/input/event2 (Power Button) Jul 12 10:12:01.283629 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 10:12:01.378729 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 10:12:02.209511 systemd-networkd[1543]: ens192: Gained IPv6LL Jul 12 10:12:02.211090 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 12 10:12:02.211511 systemd[1]: Reached target network-online.target - Network is Online. Jul 12 10:12:02.212589 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Jul 12 10:12:02.218275 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 10:12:02.219893 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 12 10:12:02.245777 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 12 10:12:02.253802 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 12 10:12:02.254010 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. 
Jul 12 10:12:02.255274 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 12 10:12:03.063199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 10:12:03.064196 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 12 10:12:03.065388 systemd[1]: Startup finished in 2.808s (kernel) + 5.647s (initrd) + 4.429s (userspace) = 12.885s. Jul 12 10:12:03.065923 (kubelet)[1807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 10:12:03.213038 login[1741]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 12 10:12:03.213724 login[1743]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 12 10:12:03.225137 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 12 10:12:03.226642 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 12 10:12:03.228559 systemd-logind[1597]: New session 1 of user core. Jul 12 10:12:03.232490 systemd-logind[1597]: New session 2 of user core. Jul 12 10:12:03.244589 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 12 10:12:03.248624 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 12 10:12:03.259649 (systemd)[1815]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 10:12:03.261809 systemd-logind[1597]: New session c1 of user core. Jul 12 10:12:03.409628 systemd[1815]: Queued start job for default target default.target. Jul 12 10:12:03.418095 systemd[1815]: Created slice app.slice - User Application Slice. Jul 12 10:12:03.418232 systemd[1815]: Reached target paths.target - Paths. Jul 12 10:12:03.418369 systemd[1815]: Reached target timers.target - Timers. Jul 12 10:12:03.420477 systemd[1815]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 12 10:12:03.429138 systemd[1815]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 12 10:12:03.430165 systemd[1815]: Reached target sockets.target - Sockets. Jul 12 10:12:03.430206 systemd[1815]: Reached target basic.target - Basic System. Jul 12 10:12:03.430239 systemd[1815]: Reached target default.target - Main User Target. Jul 12 10:12:03.430263 systemd[1815]: Startup finished in 163ms. Jul 12 10:12:03.430491 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 12 10:12:03.431565 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 12 10:12:03.432183 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 12 10:12:03.989854 kubelet[1807]: E0712 10:12:03.989817 1807 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 10:12:03.991531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 10:12:03.991633 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 10:12:03.991882 systemd[1]: kubelet.service: Consumed 674ms CPU time, 267.4M memory peak. Jul 12 10:12:14.241953 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
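The kubelet exits above because /var/lib/kubelet/config.yaml has not been written yet; on a kubeadm-provisioned node that file is created by kubeadm init or kubeadm join, so these failures keep repeating until the node is bootstrapped. For illustration, a minimal KubeletConfiguration of the kind that ends up at that path (field values are assumptions):

    # sketch only; kubeadm normally generates this with cluster-specific values
    sudo mkdir -p /var/lib/kubelet
    sudo tee /var/lib/kubelet/config.yaml <<'EOF' >/dev/null
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    EOF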
Jul 12 10:12:14.243013 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 10:12:14.481782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 10:12:14.484177 (kubelet)[1862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 10:12:14.506065 kubelet[1862]: E0712 10:12:14.505992 1862 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 10:12:14.508805 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 10:12:14.508986 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 10:12:14.509321 systemd[1]: kubelet.service: Consumed 92ms CPU time, 107.5M memory peak. Jul 12 10:12:24.667957 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 12 10:12:24.669584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 10:12:25.017497 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 10:12:25.024722 (kubelet)[1877]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 10:12:25.093798 kubelet[1877]: E0712 10:12:25.093761 1877 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 10:12:25.095428 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 10:12:25.095573 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 10:12:25.095936 systemd[1]: kubelet.service: Consumed 95ms CPU time, 110.1M memory peak. Jul 12 10:12:30.928312 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 12 10:12:30.929931 systemd[1]: Started sshd@0-139.178.70.103:22-139.178.89.65:49942.service - OpenSSH per-connection server daemon (139.178.89.65:49942). Jul 12 10:12:30.981693 sshd[1885]: Accepted publickey for core from 139.178.89.65 port 49942 ssh2: RSA SHA256:er45sfTMlm4F7XCjgdqGTRql8gwHayiMHL6WgBzS/8A Jul 12 10:12:30.982529 sshd-session[1885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:12:30.985047 systemd-logind[1597]: New session 3 of user core. Jul 12 10:12:30.994497 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 12 10:12:31.048878 systemd[1]: Started sshd@1-139.178.70.103:22-139.178.89.65:49956.service - OpenSSH per-connection server daemon (139.178.89.65:49956). Jul 12 10:12:31.088698 sshd[1891]: Accepted publickey for core from 139.178.89.65 port 49956 ssh2: RSA SHA256:er45sfTMlm4F7XCjgdqGTRql8gwHayiMHL6WgBzS/8A Jul 12 10:12:31.089588 sshd-session[1891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:12:31.092753 systemd-logind[1597]: New session 4 of user core. Jul 12 10:12:31.098543 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jul 12 10:12:31.146086 sshd[1894]: Connection closed by 139.178.89.65 port 49956 Jul 12 10:12:31.145880 sshd-session[1891]: pam_unix(sshd:session): session closed for user core Jul 12 10:12:31.154809 systemd[1]: sshd@1-139.178.70.103:22-139.178.89.65:49956.service: Deactivated successfully. Jul 12 10:12:31.155631 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 10:12:31.156334 systemd-logind[1597]: Session 4 logged out. Waiting for processes to exit. Jul 12 10:12:31.157418 systemd[1]: Started sshd@2-139.178.70.103:22-139.178.89.65:49972.service - OpenSSH per-connection server daemon (139.178.89.65:49972). Jul 12 10:12:31.158570 systemd-logind[1597]: Removed session 4. Jul 12 10:12:31.200658 sshd[1900]: Accepted publickey for core from 139.178.89.65 port 49972 ssh2: RSA SHA256:er45sfTMlm4F7XCjgdqGTRql8gwHayiMHL6WgBzS/8A Jul 12 10:12:31.201249 sshd-session[1900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:12:31.203839 systemd-logind[1597]: New session 5 of user core. Jul 12 10:12:31.211541 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 12 10:12:31.256982 sshd[1903]: Connection closed by 139.178.89.65 port 49972 Jul 12 10:12:31.257323 sshd-session[1900]: pam_unix(sshd:session): session closed for user core Jul 12 10:12:31.267231 systemd[1]: sshd@2-139.178.70.103:22-139.178.89.65:49972.service: Deactivated successfully. Jul 12 10:12:31.268227 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 10:12:31.269374 systemd-logind[1597]: Session 5 logged out. Waiting for processes to exit. Jul 12 10:12:31.270541 systemd[1]: Started sshd@3-139.178.70.103:22-139.178.89.65:49986.service - OpenSSH per-connection server daemon (139.178.89.65:49986). Jul 12 10:12:31.271378 systemd-logind[1597]: Removed session 5. Jul 12 10:12:31.308844 sshd[1909]: Accepted publickey for core from 139.178.89.65 port 49986 ssh2: RSA SHA256:er45sfTMlm4F7XCjgdqGTRql8gwHayiMHL6WgBzS/8A Jul 12 10:12:31.309633 sshd-session[1909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:12:31.312252 systemd-logind[1597]: New session 6 of user core. Jul 12 10:12:31.314597 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 12 10:12:31.362850 sshd[1912]: Connection closed by 139.178.89.65 port 49986 Jul 12 10:12:31.362194 sshd-session[1909]: pam_unix(sshd:session): session closed for user core Jul 12 10:12:31.374210 systemd[1]: sshd@3-139.178.70.103:22-139.178.89.65:49986.service: Deactivated successfully. Jul 12 10:12:31.375145 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 10:12:31.375983 systemd-logind[1597]: Session 6 logged out. Waiting for processes to exit. Jul 12 10:12:31.377379 systemd[1]: Started sshd@4-139.178.70.103:22-139.178.89.65:49988.service - OpenSSH per-connection server daemon (139.178.89.65:49988). Jul 12 10:12:31.378525 systemd-logind[1597]: Removed session 6. Jul 12 10:12:31.413211 sshd[1918]: Accepted publickey for core from 139.178.89.65 port 49988 ssh2: RSA SHA256:er45sfTMlm4F7XCjgdqGTRql8gwHayiMHL6WgBzS/8A Jul 12 10:12:31.414030 sshd-session[1918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:12:31.416663 systemd-logind[1597]: New session 7 of user core. Jul 12 10:12:31.428570 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 12 10:12:31.508231 sudo[1922]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 12 10:12:31.508383 sudo[1922]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 10:12:31.525750 sudo[1922]: pam_unix(sudo:session): session closed for user root Jul 12 10:12:31.526600 sshd[1921]: Connection closed by 139.178.89.65 port 49988 Jul 12 10:12:31.527416 sshd-session[1918]: pam_unix(sshd:session): session closed for user core Jul 12 10:12:31.535984 systemd[1]: sshd@4-139.178.70.103:22-139.178.89.65:49988.service: Deactivated successfully. Jul 12 10:12:31.536911 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 10:12:31.538221 systemd-logind[1597]: Session 7 logged out. Waiting for processes to exit. Jul 12 10:12:31.538859 systemd[1]: Started sshd@5-139.178.70.103:22-139.178.89.65:49998.service - OpenSSH per-connection server daemon (139.178.89.65:49998). Jul 12 10:12:31.539867 systemd-logind[1597]: Removed session 7. Jul 12 10:12:31.585725 sshd[1928]: Accepted publickey for core from 139.178.89.65 port 49998 ssh2: RSA SHA256:er45sfTMlm4F7XCjgdqGTRql8gwHayiMHL6WgBzS/8A Jul 12 10:12:31.586540 sshd-session[1928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:12:31.589512 systemd-logind[1597]: New session 8 of user core. Jul 12 10:12:31.595517 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 12 10:12:31.642918 sudo[1933]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 12 10:12:31.643259 sudo[1933]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 10:12:31.651945 sudo[1933]: pam_unix(sudo:session): session closed for user root Jul 12 10:12:31.654954 sudo[1932]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 12 10:12:31.655102 sudo[1932]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 10:12:31.660525 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 12 10:12:31.686713 augenrules[1955]: No rules Jul 12 10:12:31.687509 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 10:12:31.687654 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 12 10:12:31.688436 sudo[1932]: pam_unix(sudo:session): session closed for user root Jul 12 10:12:31.689074 sshd[1931]: Connection closed by 139.178.89.65 port 49998 Jul 12 10:12:31.689779 sshd-session[1928]: pam_unix(sshd:session): session closed for user core Jul 12 10:12:31.695333 systemd[1]: sshd@5-139.178.70.103:22-139.178.89.65:49998.service: Deactivated successfully. Jul 12 10:12:31.696192 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 10:12:31.696938 systemd-logind[1597]: Session 8 logged out. Waiting for processes to exit. Jul 12 10:12:31.697859 systemd[1]: Started sshd@6-139.178.70.103:22-139.178.89.65:50006.service - OpenSSH per-connection server daemon (139.178.89.65:50006). Jul 12 10:12:31.699563 systemd-logind[1597]: Removed session 8. Jul 12 10:12:31.736183 sshd[1964]: Accepted publickey for core from 139.178.89.65 port 50006 ssh2: RSA SHA256:er45sfTMlm4F7XCjgdqGTRql8gwHayiMHL6WgBzS/8A Jul 12 10:12:31.736955 sshd-session[1964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:12:31.739466 systemd-logind[1597]: New session 9 of user core. Jul 12 10:12:31.749578 systemd[1]: Started session-9.scope - Session 9 of User core. 
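The session above removes the shipped audit rule files and restarts audit-rules, which is why augenrules reports "No rules" when the service reloads. A sketch of adding a rule back through the same mechanism (the watch rule itself is only an example):

    # augenrules concatenates /etc/audit/rules.d/*.rules into the loaded ruleset
    echo '-w /etc/kubernetes/ -p wa -k kube-config' | sudo tee /etc/audit/rules.d/50-kube.rules >/dev/null
    sudo systemctl restart audit-rules.service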
Jul 12 10:12:31.797820 sudo[1968]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 10:12:31.798598 sudo[1968]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 10:12:32.216066 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 12 10:12:32.227581 (dockerd)[1985]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 12 10:12:32.427417 dockerd[1985]: time="2025-07-12T10:12:32.427036554Z" level=info msg="Starting up" Jul 12 10:12:32.427805 dockerd[1985]: time="2025-07-12T10:12:32.427792597Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 12 10:12:32.433501 dockerd[1985]: time="2025-07-12T10:12:32.433470864Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 12 10:12:32.468274 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1958402366-merged.mount: Deactivated successfully. Jul 12 10:12:32.487192 dockerd[1985]: time="2025-07-12T10:12:32.487170748Z" level=info msg="Loading containers: start." Jul 12 10:12:32.494426 kernel: Initializing XFRM netlink socket Jul 12 10:12:32.634156 systemd-networkd[1543]: docker0: Link UP Jul 12 10:12:32.635475 dockerd[1985]: time="2025-07-12T10:12:32.635380816Z" level=info msg="Loading containers: done." Jul 12 10:12:32.644928 dockerd[1985]: time="2025-07-12T10:12:32.644697864Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 10:12:32.644928 dockerd[1985]: time="2025-07-12T10:12:32.644753210Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 12 10:12:32.644928 dockerd[1985]: time="2025-07-12T10:12:32.644806335Z" level=info msg="Initializing buildkit" Jul 12 10:12:32.654130 dockerd[1985]: time="2025-07-12T10:12:32.654113351Z" level=info msg="Completed buildkit initialization" Jul 12 10:12:32.658800 dockerd[1985]: time="2025-07-12T10:12:32.658785473Z" level=info msg="Daemon has completed initialization" Jul 12 10:12:32.659003 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 12 10:12:32.659650 dockerd[1985]: time="2025-07-12T10:12:32.659003083Z" level=info msg="API listen on /run/docker.sock" Jul 12 10:12:33.232523 containerd[1615]: time="2025-07-12T10:12:33.232497133Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 12 10:12:33.466071 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3667805272-merged.mount: Deactivated successfully. Jul 12 10:12:33.835504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1478554600.mount: Deactivated successfully. 
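The PullImage lines above and below come from containerd's CRI interface, so the images land in containerd's k8s.io namespace rather than in Docker's store. Assuming crictl is pointed at the containerd socket, they can be listed or pre-pulled like this:

    # both tools talk to the same containerd instance; image names match the log
    sudo ctr --namespace k8s.io images ls | grep kube-apiserver
    sudo crictl pull registry.k8s.io/kube-proxy:v1.33.2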
Jul 12 10:12:34.802332 containerd[1615]: time="2025-07-12T10:12:34.802136093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:34.802721 containerd[1615]: time="2025-07-12T10:12:34.802704470Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099" Jul 12 10:12:34.803430 containerd[1615]: time="2025-07-12T10:12:34.803415308Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:34.804650 containerd[1615]: time="2025-07-12T10:12:34.804636432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:34.805195 containerd[1615]: time="2025-07-12T10:12:34.805183220Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 1.572662487s" Jul 12 10:12:34.805242 containerd[1615]: time="2025-07-12T10:12:34.805233971Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 12 10:12:34.805735 containerd[1615]: time="2025-07-12T10:12:34.805694854Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 12 10:12:35.167876 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 12 10:12:35.168908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 10:12:35.384627 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 10:12:35.396122 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 10:12:35.423452 kubelet[2257]: E0712 10:12:35.423379 2257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 10:12:35.424887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 10:12:35.424967 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 10:12:35.425282 systemd[1]: kubelet.service: Consumed 92ms CPU time, 110.1M memory peak. 
Jul 12 10:12:36.316415 containerd[1615]: time="2025-07-12T10:12:36.316373633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:36.322168 containerd[1615]: time="2025-07-12T10:12:36.322139037Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946" Jul 12 10:12:36.327657 containerd[1615]: time="2025-07-12T10:12:36.327631364Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:36.335633 containerd[1615]: time="2025-07-12T10:12:36.335588992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:36.336197 containerd[1615]: time="2025-07-12T10:12:36.336013270Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.530222833s" Jul 12 10:12:36.336197 containerd[1615]: time="2025-07-12T10:12:36.336034087Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 12 10:12:36.336464 containerd[1615]: time="2025-07-12T10:12:36.336410382Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 12 10:12:37.480470 containerd[1615]: time="2025-07-12T10:12:37.480437715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:37.481064 containerd[1615]: time="2025-07-12T10:12:37.481016461Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055" Jul 12 10:12:37.481445 containerd[1615]: time="2025-07-12T10:12:37.481416640Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:37.483000 containerd[1615]: time="2025-07-12T10:12:37.482799587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:37.483284 containerd[1615]: time="2025-07-12T10:12:37.483268809Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.146843401s" Jul 12 10:12:37.483315 containerd[1615]: time="2025-07-12T10:12:37.483285418Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 12 10:12:37.483583 
containerd[1615]: time="2025-07-12T10:12:37.483571949Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 12 10:12:38.341919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2851517756.mount: Deactivated successfully. Jul 12 10:12:38.716315 containerd[1615]: time="2025-07-12T10:12:38.716239709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:38.725835 containerd[1615]: time="2025-07-12T10:12:38.725810565Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jul 12 10:12:38.730460 containerd[1615]: time="2025-07-12T10:12:38.730438767Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:38.741371 containerd[1615]: time="2025-07-12T10:12:38.741340478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:38.741657 containerd[1615]: time="2025-07-12T10:12:38.741558401Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.257869921s" Jul 12 10:12:38.741657 containerd[1615]: time="2025-07-12T10:12:38.741576471Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 12 10:12:38.741898 containerd[1615]: time="2025-07-12T10:12:38.741884899Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 12 10:12:39.711882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3444771301.mount: Deactivated successfully. 
Jul 12 10:12:40.518652 containerd[1615]: time="2025-07-12T10:12:40.518587224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:40.540476 containerd[1615]: time="2025-07-12T10:12:40.540436790Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 12 10:12:40.550131 containerd[1615]: time="2025-07-12T10:12:40.550103119Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:40.560893 containerd[1615]: time="2025-07-12T10:12:40.560859769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:40.561745 containerd[1615]: time="2025-07-12T10:12:40.561322270Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.819422475s" Jul 12 10:12:40.561745 containerd[1615]: time="2025-07-12T10:12:40.561342208Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 12 10:12:40.561745 containerd[1615]: time="2025-07-12T10:12:40.561691874Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 10:12:41.209042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3633938943.mount: Deactivated successfully. 
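For scale, the "Pulled image" messages above report both image size and elapsed time, which works out to roughly 19, 25 and 11.5 MB/s for kube-apiserver, kube-proxy and coredns respectively. A small Go sketch of that arithmetic, with the byte counts and durations copied from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Sizes (bytes) and durations copied from the "Pulled image" messages above.
	pulls := []struct {
		image string
		bytes float64
		dur   time.Duration
	}{
		{"kube-apiserver:v1.33.2", 30075899, 1572662487 * time.Nanosecond},
		{"kube-proxy:v1.33.2", 31891765, 1257869921 * time.Nanosecond},
		{"coredns/coredns:v1.12.0", 20939036, 1819422475 * time.Nanosecond},
	}
	for _, p := range pulls {
		fmt.Printf("%-25s %.1f MB/s\n", p.image, p.bytes/p.dur.Seconds()/1e6)
	}
}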
Jul 12 10:12:41.245051 containerd[1615]: time="2025-07-12T10:12:41.245011315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 10:12:41.248257 containerd[1615]: time="2025-07-12T10:12:41.248225653Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 12 10:12:41.248513 containerd[1615]: time="2025-07-12T10:12:41.248495172Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 10:12:41.249693 containerd[1615]: time="2025-07-12T10:12:41.249673977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 10:12:41.250423 containerd[1615]: time="2025-07-12T10:12:41.250394762Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 688.690765ms" Jul 12 10:12:41.250451 containerd[1615]: time="2025-07-12T10:12:41.250420657Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 12 10:12:41.250707 containerd[1615]: time="2025-07-12T10:12:41.250693690Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 12 10:12:41.756818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2929884040.mount: Deactivated successfully. Jul 12 10:12:45.667838 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 12 10:12:45.669888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 10:12:46.193429 update_engine[1599]: I20250712 10:12:46.193190 1599 update_attempter.cc:509] Updating boot flags... Jul 12 10:12:46.341621 containerd[1615]: time="2025-07-12T10:12:46.341131067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:46.362947 containerd[1615]: time="2025-07-12T10:12:46.362920081Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 12 10:12:46.628911 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 12 10:12:46.631599 (kubelet)[2417]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 10:12:46.726501 containerd[1615]: time="2025-07-12T10:12:46.726457533Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:46.739815 containerd[1615]: time="2025-07-12T10:12:46.739774536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:46.740501 containerd[1615]: time="2025-07-12T10:12:46.740287312Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 5.489577987s" Jul 12 10:12:46.740501 containerd[1615]: time="2025-07-12T10:12:46.740312283Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 12 10:12:46.786451 kubelet[2417]: E0712 10:12:46.786416 2417 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 10:12:46.788010 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 10:12:46.788096 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 10:12:46.788459 systemd[1]: kubelet.service: Consumed 106ms CPU time, 110M memory peak. Jul 12 10:12:49.766773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 10:12:49.766889 systemd[1]: kubelet.service: Consumed 106ms CPU time, 110M memory peak. Jul 12 10:12:49.768679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 10:12:49.786788 systemd[1]: Reload requested from client PID 2450 ('systemctl') (unit session-9.scope)... Jul 12 10:12:49.786905 systemd[1]: Reloading... Jul 12 10:12:49.859415 zram_generator::config[2499]: No configuration found. Jul 12 10:12:49.917454 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 10:12:49.925804 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 12 10:12:49.994500 systemd[1]: Reloading finished in 207 ms. Jul 12 10:12:50.047800 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 12 10:12:50.047859 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 12 10:12:50.048057 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 10:12:50.049871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 12 10:12:50.310155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 10:12:50.313301 (kubelet)[2560]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 10:12:50.358320 kubelet[2560]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 10:12:50.358558 kubelet[2560]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 10:12:50.358608 kubelet[2560]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 10:12:50.375054 kubelet[2560]: I0712 10:12:50.375005 2560 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 10:12:50.854410 kubelet[2560]: I0712 10:12:50.854243 2560 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 12 10:12:50.854410 kubelet[2560]: I0712 10:12:50.854263 2560 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 10:12:50.854410 kubelet[2560]: I0712 10:12:50.854389 2560 server.go:956] "Client rotation is on, will bootstrap in background" Jul 12 10:12:50.881320 kubelet[2560]: I0712 10:12:50.881302 2560 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 10:12:50.882496 kubelet[2560]: E0712 10:12:50.882069 2560 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 12 10:12:50.891212 kubelet[2560]: I0712 10:12:50.891201 2560 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 12 10:12:50.895618 kubelet[2560]: I0712 10:12:50.895607 2560 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 10:12:50.897081 kubelet[2560]: I0712 10:12:50.897065 2560 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 10:12:50.899437 kubelet[2560]: I0712 10:12:50.897133 2560 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 10:12:50.900235 kubelet[2560]: I0712 10:12:50.900227 2560 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 10:12:50.900284 kubelet[2560]: I0712 10:12:50.900278 2560 container_manager_linux.go:303] "Creating device plugin manager" Jul 12 10:12:50.900408 kubelet[2560]: I0712 10:12:50.900391 2560 state_mem.go:36] "Initialized new in-memory state store" Jul 12 10:12:50.904061 kubelet[2560]: I0712 10:12:50.904049 2560 kubelet.go:480] "Attempting to sync node with API server" Jul 12 10:12:50.904134 kubelet[2560]: I0712 10:12:50.904127 2560 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 10:12:50.905208 kubelet[2560]: I0712 10:12:50.905199 2560 kubelet.go:386] "Adding apiserver pod source" Jul 12 10:12:50.906674 kubelet[2560]: E0712 10:12:50.906659 2560 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 12 10:12:50.906744 kubelet[2560]: I0712 10:12:50.906736 2560 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 10:12:50.914456 kubelet[2560]: I0712 10:12:50.914440 2560 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 12 10:12:50.914857 kubelet[2560]: I0712 10:12:50.914848 2560 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 
12 10:12:50.915893 kubelet[2560]: W0712 10:12:50.915360 2560 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 10:12:50.917892 kubelet[2560]: E0712 10:12:50.917783 2560 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 12 10:12:50.918869 kubelet[2560]: I0712 10:12:50.918782 2560 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 10:12:50.918869 kubelet[2560]: I0712 10:12:50.918816 2560 server.go:1289] "Started kubelet" Jul 12 10:12:50.921291 kubelet[2560]: I0712 10:12:50.921279 2560 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 10:12:50.924538 kubelet[2560]: E0712 10:12:50.922026 2560 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.103:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.103:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1851795d212486ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 10:12:50.918794989 +0000 UTC m=+0.602998781,LastTimestamp:2025-07-12 10:12:50.918794989 +0000 UTC m=+0.602998781,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 12 10:12:50.925503 kubelet[2560]: I0712 10:12:50.925486 2560 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 10:12:50.930217 kubelet[2560]: I0712 10:12:50.930202 2560 server.go:317] "Adding debug handlers to kubelet server" Jul 12 10:12:50.930838 kubelet[2560]: I0712 10:12:50.930822 2560 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 10:12:50.931185 kubelet[2560]: E0712 10:12:50.931171 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 10:12:50.934165 kubelet[2560]: I0712 10:12:50.933677 2560 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 10:12:50.934165 kubelet[2560]: I0712 10:12:50.933715 2560 reconciler.go:26] "Reconciler: start to sync state" Jul 12 10:12:50.937103 kubelet[2560]: I0712 10:12:50.937059 2560 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 10:12:50.937298 kubelet[2560]: I0712 10:12:50.937290 2560 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 10:12:50.937487 kubelet[2560]: I0712 10:12:50.937479 2560 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 10:12:50.939124 kubelet[2560]: I0712 10:12:50.939111 2560 factory.go:223] Registration of the systemd container factory successfully Jul 12 10:12:50.939249 kubelet[2560]: I0712 10:12:50.939239 2560 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Jul 12 10:12:50.939483 kubelet[2560]: E0712 10:12:50.939470 2560 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 12 10:12:50.939559 kubelet[2560]: E0712 10:12:50.939547 2560 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="200ms" Jul 12 10:12:50.941766 kubelet[2560]: I0712 10:12:50.941369 2560 factory.go:223] Registration of the containerd container factory successfully Jul 12 10:12:50.943106 kubelet[2560]: E0712 10:12:50.943076 2560 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 10:12:50.960975 kubelet[2560]: I0712 10:12:50.942603 2560 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 12 10:12:50.965581 kubelet[2560]: I0712 10:12:50.965521 2560 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 12 10:12:50.965581 kubelet[2560]: I0712 10:12:50.965535 2560 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 12 10:12:50.965581 kubelet[2560]: I0712 10:12:50.965546 2560 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 12 10:12:50.965581 kubelet[2560]: I0712 10:12:50.965550 2560 kubelet.go:2436] "Starting kubelet main sync loop" Jul 12 10:12:50.965581 kubelet[2560]: E0712 10:12:50.965570 2560 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 10:12:50.968589 kubelet[2560]: E0712 10:12:50.968566 2560 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 12 10:12:50.968851 kubelet[2560]: I0712 10:12:50.968833 2560 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 10:12:50.968975 kubelet[2560]: I0712 10:12:50.968840 2560 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 10:12:50.968975 kubelet[2560]: I0712 10:12:50.968915 2560 state_mem.go:36] "Initialized new in-memory state store" Jul 12 10:12:50.969942 kubelet[2560]: I0712 10:12:50.969935 2560 policy_none.go:49] "None policy: Start" Jul 12 10:12:50.970000 kubelet[2560]: I0712 10:12:50.969995 2560 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 10:12:50.970034 kubelet[2560]: I0712 10:12:50.970030 2560 state_mem.go:35] "Initializing new in-memory state store" Jul 12 10:12:50.974089 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 12 10:12:50.986071 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
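The HardEvictionThresholds in the nodeConfig dump above are the kubelet defaults: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. The percentages are relative to each filesystem's capacity; a sketch of that arithmetic for an assumed 20 GiB filesystem (the capacity is made up purely for illustration):

package main

import "fmt"

func main() {
	// Capacity is assumed for illustration only; the kubelet takes the real value from cadvisor's filesystem stats.
	var fsCapacity int64 = 20 << 30 // 20 GiB

	fmt.Printf("memory.available  eviction below %d bytes (fixed 100Mi)\n", 100<<20)
	fmt.Printf("nodefs.available  eviction below %d bytes (10%% of capacity)\n", fsCapacity*10/100)
	fmt.Printf("imagefs.available eviction below %d bytes (15%% of capacity)\n", fsCapacity*15/100)
}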
Jul 12 10:12:50.988371 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 12 10:12:50.994802 kubelet[2560]: E0712 10:12:50.994791 2560 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 12 10:12:50.994947 kubelet[2560]: I0712 10:12:50.994941 2560 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 10:12:50.994997 kubelet[2560]: I0712 10:12:50.994981 2560 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 10:12:50.995405 kubelet[2560]: I0712 10:12:50.995130 2560 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 10:12:50.995943 kubelet[2560]: E0712 10:12:50.995930 2560 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 12 10:12:50.995982 kubelet[2560]: E0712 10:12:50.995951 2560 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 12 10:12:51.073065 systemd[1]: Created slice kubepods-burstable-pod104d19ea7260ba01f9babd113918f71f.slice - libcontainer container kubepods-burstable-pod104d19ea7260ba01f9babd113918f71f.slice. Jul 12 10:12:51.084075 kubelet[2560]: E0712 10:12:51.083948 2560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 10:12:51.085809 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 12 10:12:51.093257 kubelet[2560]: E0712 10:12:51.093197 2560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 10:12:51.095275 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. 
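With the systemd cgroup driver shown in the dump above (CgroupDriver "systemd", CgroupRoot "/"), each pod gets a slice nested under its QoS class, which is why kubepods-burstable-pod104d19ea7260ba01f9babd113918f71f.slice appears right after kubepods-burstable.slice. A sketch of that naming convention; the dash-to-underscore escaping is only visible for UIDs that contain dashes, which these static-pod hashes do not:

package main

import (
	"fmt"
	"strings"
)

// podSliceName mirrors the naming seen in the log: kubepods[-<qos>]-pod<uid>.slice,
// with dashes in the UID replaced because "-" is the slice hierarchy separator in systemd.
func podSliceName(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_")
	if qosClass == "guaranteed" {
		// Guaranteed pods sit directly under kubepods.slice, without a QoS sub-slice.
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
}

func main() {
	fmt.Println(podSliceName("burstable", "104d19ea7260ba01f9babd113918f71f"))
	// Hypothetical UID with dashes, to show the escaping; not from this log.
	fmt.Println(podSliceName("besteffort", "72b0a3f1-5fa3-4f0e-9c3d-0123456789ab"))
}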
Jul 12 10:12:51.096681 kubelet[2560]: I0712 10:12:51.096648 2560 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 10:12:51.096880 kubelet[2560]: E0712 10:12:51.096836 2560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 10:12:51.097194 kubelet[2560]: E0712 10:12:51.097173 2560 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Jul 12 10:12:51.134743 kubelet[2560]: I0712 10:12:51.134495 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/104d19ea7260ba01f9babd113918f71f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"104d19ea7260ba01f9babd113918f71f\") " pod="kube-system/kube-apiserver-localhost" Jul 12 10:12:51.139909 kubelet[2560]: E0712 10:12:51.139888 2560 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="400ms" Jul 12 10:12:51.235430 kubelet[2560]: I0712 10:12:51.235307 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/104d19ea7260ba01f9babd113918f71f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"104d19ea7260ba01f9babd113918f71f\") " pod="kube-system/kube-apiserver-localhost" Jul 12 10:12:51.235430 kubelet[2560]: I0712 10:12:51.235344 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/104d19ea7260ba01f9babd113918f71f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"104d19ea7260ba01f9babd113918f71f\") " pod="kube-system/kube-apiserver-localhost" Jul 12 10:12:51.235430 kubelet[2560]: I0712 10:12:51.235357 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 10:12:51.235430 kubelet[2560]: I0712 10:12:51.235366 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 10:12:51.235430 kubelet[2560]: I0712 10:12:51.235375 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 10:12:51.235602 kubelet[2560]: I0712 10:12:51.235384 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 12 10:12:51.235766 kubelet[2560]: I0712 10:12:51.235392 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 10:12:51.235766 kubelet[2560]: I0712 10:12:51.235751 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 10:12:51.298937 kubelet[2560]: I0712 10:12:51.298907 2560 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 10:12:51.299160 kubelet[2560]: E0712 10:12:51.299140 2560 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Jul 12 10:12:51.387049 containerd[1615]: time="2025-07-12T10:12:51.386975558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:104d19ea7260ba01f9babd113918f71f,Namespace:kube-system,Attempt:0,}" Jul 12 10:12:51.394143 containerd[1615]: time="2025-07-12T10:12:51.394073681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 12 10:12:51.411700 containerd[1615]: time="2025-07-12T10:12:51.411674051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 12 10:12:51.539967 containerd[1615]: time="2025-07-12T10:12:51.539914625Z" level=info msg="connecting to shim 88668c4957742830f9c88f83b1499df0f18731657b2a5226ac2911cf0dcad3b1" address="unix:///run/containerd/s/03d429d6eb7c2258ed3446d50cca9f5602a23161d945385a400c8ef0aa31a60a" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:12:51.540373 containerd[1615]: time="2025-07-12T10:12:51.540334701Z" level=info msg="connecting to shim 6619df624688fbec5f36bfa0f3567add22039b3373d91cd08c929eb7d8e3ce67" address="unix:///run/containerd/s/22e5abfd1cec92b8ccaf5fa0bcd5aa7ff6eb664ea52d158cf80029a00fb574f4" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:12:51.541421 kubelet[2560]: E0712 10:12:51.541357 2560 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="800ms" Jul 12 10:12:51.551141 containerd[1615]: time="2025-07-12T10:12:51.550531927Z" level=info msg="connecting to shim edad15e69408a068d1a058326eff3f35e29be55085283b5b13bbd63067914a36" address="unix:///run/containerd/s/6dba175ad32125f6506178a3754bd6096ff9f95351de0cc55a6672e21cd6038d" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:12:51.651501 systemd[1]: Started cri-containerd-6619df624688fbec5f36bfa0f3567add22039b3373d91cd08c929eb7d8e3ce67.scope - 
libcontainer container 6619df624688fbec5f36bfa0f3567add22039b3373d91cd08c929eb7d8e3ce67. Jul 12 10:12:51.652850 systemd[1]: Started cri-containerd-88668c4957742830f9c88f83b1499df0f18731657b2a5226ac2911cf0dcad3b1.scope - libcontainer container 88668c4957742830f9c88f83b1499df0f18731657b2a5226ac2911cf0dcad3b1. Jul 12 10:12:51.654243 systemd[1]: Started cri-containerd-edad15e69408a068d1a058326eff3f35e29be55085283b5b13bbd63067914a36.scope - libcontainer container edad15e69408a068d1a058326eff3f35e29be55085283b5b13bbd63067914a36. Jul 12 10:12:51.700706 kubelet[2560]: I0712 10:12:51.700687 2560 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 10:12:51.700887 kubelet[2560]: E0712 10:12:51.700868 2560 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Jul 12 10:12:51.709743 containerd[1615]: time="2025-07-12T10:12:51.709717625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:104d19ea7260ba01f9babd113918f71f,Namespace:kube-system,Attempt:0,} returns sandbox id \"edad15e69408a068d1a058326eff3f35e29be55085283b5b13bbd63067914a36\"" Jul 12 10:12:51.709817 containerd[1615]: time="2025-07-12T10:12:51.709795023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"88668c4957742830f9c88f83b1499df0f18731657b2a5226ac2911cf0dcad3b1\"" Jul 12 10:12:51.713137 containerd[1615]: time="2025-07-12T10:12:51.713056483Z" level=info msg="CreateContainer within sandbox \"88668c4957742830f9c88f83b1499df0f18731657b2a5226ac2911cf0dcad3b1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 10:12:51.713622 containerd[1615]: time="2025-07-12T10:12:51.713604867Z" level=info msg="CreateContainer within sandbox \"edad15e69408a068d1a058326eff3f35e29be55085283b5b13bbd63067914a36\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 10:12:51.720495 containerd[1615]: time="2025-07-12T10:12:51.720464815Z" level=info msg="Container 92fc66240ce557d792c85096cbd0aa86af78d572cb2fa4cf99a34b38141dc8b1: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:12:51.722271 containerd[1615]: time="2025-07-12T10:12:51.722226358Z" level=info msg="Container 10e535345c7dd00c2f8866e9b03828f4bbadb70c52ac95d70055f58b4730981c: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:12:51.725909 containerd[1615]: time="2025-07-12T10:12:51.725889499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"6619df624688fbec5f36bfa0f3567add22039b3373d91cd08c929eb7d8e3ce67\"" Jul 12 10:12:51.729196 containerd[1615]: time="2025-07-12T10:12:51.729144177Z" level=info msg="CreateContainer within sandbox \"6619df624688fbec5f36bfa0f3567add22039b3373d91cd08c929eb7d8e3ce67\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 10:12:51.730388 containerd[1615]: time="2025-07-12T10:12:51.730369288Z" level=info msg="CreateContainer within sandbox \"edad15e69408a068d1a058326eff3f35e29be55085283b5b13bbd63067914a36\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"10e535345c7dd00c2f8866e9b03828f4bbadb70c52ac95d70055f58b4730981c\"" Jul 12 10:12:51.730846 containerd[1615]: time="2025-07-12T10:12:51.730771405Z" 
level=info msg="StartContainer for \"10e535345c7dd00c2f8866e9b03828f4bbadb70c52ac95d70055f58b4730981c\"" Jul 12 10:12:51.731417 containerd[1615]: time="2025-07-12T10:12:51.731391820Z" level=info msg="connecting to shim 10e535345c7dd00c2f8866e9b03828f4bbadb70c52ac95d70055f58b4730981c" address="unix:///run/containerd/s/6dba175ad32125f6506178a3754bd6096ff9f95351de0cc55a6672e21cd6038d" protocol=ttrpc version=3 Jul 12 10:12:51.731969 containerd[1615]: time="2025-07-12T10:12:51.731950600Z" level=info msg="CreateContainer within sandbox \"88668c4957742830f9c88f83b1499df0f18731657b2a5226ac2911cf0dcad3b1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"92fc66240ce557d792c85096cbd0aa86af78d572cb2fa4cf99a34b38141dc8b1\"" Jul 12 10:12:51.732424 containerd[1615]: time="2025-07-12T10:12:51.732389930Z" level=info msg="StartContainer for \"92fc66240ce557d792c85096cbd0aa86af78d572cb2fa4cf99a34b38141dc8b1\"" Jul 12 10:12:51.732993 containerd[1615]: time="2025-07-12T10:12:51.732981172Z" level=info msg="connecting to shim 92fc66240ce557d792c85096cbd0aa86af78d572cb2fa4cf99a34b38141dc8b1" address="unix:///run/containerd/s/03d429d6eb7c2258ed3446d50cca9f5602a23161d945385a400c8ef0aa31a60a" protocol=ttrpc version=3 Jul 12 10:12:51.734205 containerd[1615]: time="2025-07-12T10:12:51.734194741Z" level=info msg="Container 72c0fc50e3fd31a5915fe77c86470f364abd7fd905f654d7f9c514f71e17437a: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:12:51.740155 containerd[1615]: time="2025-07-12T10:12:51.740132526Z" level=info msg="CreateContainer within sandbox \"6619df624688fbec5f36bfa0f3567add22039b3373d91cd08c929eb7d8e3ce67\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"72c0fc50e3fd31a5915fe77c86470f364abd7fd905f654d7f9c514f71e17437a\"" Jul 12 10:12:51.740678 containerd[1615]: time="2025-07-12T10:12:51.740667134Z" level=info msg="StartContainer for \"72c0fc50e3fd31a5915fe77c86470f364abd7fd905f654d7f9c514f71e17437a\"" Jul 12 10:12:51.743671 containerd[1615]: time="2025-07-12T10:12:51.743059600Z" level=info msg="connecting to shim 72c0fc50e3fd31a5915fe77c86470f364abd7fd905f654d7f9c514f71e17437a" address="unix:///run/containerd/s/22e5abfd1cec92b8ccaf5fa0bcd5aa7ff6eb664ea52d158cf80029a00fb574f4" protocol=ttrpc version=3 Jul 12 10:12:51.750506 systemd[1]: Started cri-containerd-10e535345c7dd00c2f8866e9b03828f4bbadb70c52ac95d70055f58b4730981c.scope - libcontainer container 10e535345c7dd00c2f8866e9b03828f4bbadb70c52ac95d70055f58b4730981c. Jul 12 10:12:51.754090 systemd[1]: Started cri-containerd-92fc66240ce557d792c85096cbd0aa86af78d572cb2fa4cf99a34b38141dc8b1.scope - libcontainer container 92fc66240ce557d792c85096cbd0aa86af78d572cb2fa4cf99a34b38141dc8b1. Jul 12 10:12:51.767534 systemd[1]: Started cri-containerd-72c0fc50e3fd31a5915fe77c86470f364abd7fd905f654d7f9c514f71e17437a.scope - libcontainer container 72c0fc50e3fd31a5915fe77c86470f364abd7fd905f654d7f9c514f71e17437a. 
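The sandbox-then-container sequence above (RunPodSandbox returns a sandbox id, CreateContainer runs within that sandbox, then StartContainer) is the standard CRI flow between the kubelet and containerd. A minimal sketch of the same three calls against the CRI v1 API; the socket path and pod metadata are taken from the log, but this is an illustrative stand-alone client, not how the kubelet itself is wired:

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx := context.Background()

	// containerd's CRI endpoint on this node; the kubelet reaches it via --container-runtime-endpoint.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Sandbox metadata taken from the RunPodSandbox message above.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-apiserver-localhost",
			Uid:       "104d19ea7260ba01f9babd113918f71f",
			Namespace: "kube-system",
		},
	}

	// 1. RunPodSandbox returns the sandbox id (the pause container's pod).
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within that sandbox.
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		SandboxConfig: sandboxCfg,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-apiserver"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-apiserver:v1.33.2"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer, matching the "StartContainer ... returns successfully" lines.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("started %s in sandbox %s", c.ContainerId, sb.PodSandboxId)
}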
Jul 12 10:12:51.810522 containerd[1615]: time="2025-07-12T10:12:51.810462681Z" level=info msg="StartContainer for \"10e535345c7dd00c2f8866e9b03828f4bbadb70c52ac95d70055f58b4730981c\" returns successfully" Jul 12 10:12:51.811684 kubelet[2560]: E0712 10:12:51.811459 2560 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.103:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.103:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1851795d212486ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 10:12:50.918794989 +0000 UTC m=+0.602998781,LastTimestamp:2025-07-12 10:12:50.918794989 +0000 UTC m=+0.602998781,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 12 10:12:51.819994 containerd[1615]: time="2025-07-12T10:12:51.819939730Z" level=info msg="StartContainer for \"72c0fc50e3fd31a5915fe77c86470f364abd7fd905f654d7f9c514f71e17437a\" returns successfully" Jul 12 10:12:51.831893 containerd[1615]: time="2025-07-12T10:12:51.831352736Z" level=info msg="StartContainer for \"92fc66240ce557d792c85096cbd0aa86af78d572cb2fa4cf99a34b38141dc8b1\" returns successfully" Jul 12 10:12:51.955183 kubelet[2560]: E0712 10:12:51.954532 2560 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 12 10:12:51.974080 kubelet[2560]: E0712 10:12:51.974064 2560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 10:12:51.975466 kubelet[2560]: E0712 10:12:51.975456 2560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 10:12:51.976549 kubelet[2560]: E0712 10:12:51.976504 2560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 10:12:52.032800 kubelet[2560]: E0712 10:12:52.032775 2560 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 12 10:12:52.036450 kubelet[2560]: E0712 10:12:52.036432 2560 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 12 10:12:52.503719 kubelet[2560]: I0712 10:12:52.503556 2560 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 10:12:52.977915 kubelet[2560]: E0712 10:12:52.977821 
2560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 10:12:52.978474 kubelet[2560]: E0712 10:12:52.978436 2560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 10:12:53.112300 kubelet[2560]: E0712 10:12:53.112273 2560 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 12 10:12:53.189543 kubelet[2560]: I0712 10:12:53.189516 2560 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 12 10:12:53.189543 kubelet[2560]: E0712 10:12:53.189544 2560 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 12 10:12:53.196716 kubelet[2560]: E0712 10:12:53.196657 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 10:12:53.297065 kubelet[2560]: E0712 10:12:53.296992 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 10:12:53.397878 kubelet[2560]: E0712 10:12:53.397847 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 10:12:53.498927 kubelet[2560]: E0712 10:12:53.498894 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 10:12:53.599534 kubelet[2560]: E0712 10:12:53.599433 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 10:12:53.699971 kubelet[2560]: E0712 10:12:53.699943 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 10:12:53.800645 kubelet[2560]: E0712 10:12:53.800614 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 10:12:53.901439 kubelet[2560]: E0712 10:12:53.901329 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 10:12:54.001585 kubelet[2560]: E0712 10:12:54.001561 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 10:12:54.134393 kubelet[2560]: I0712 10:12:54.134096 2560 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 10:12:54.148710 kubelet[2560]: I0712 10:12:54.148688 2560 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 10:12:54.160149 kubelet[2560]: I0712 10:12:54.159951 2560 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 12 10:12:54.366427 kubelet[2560]: I0712 10:12:54.366109 2560 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 12 10:12:54.369278 kubelet[2560]: E0712 10:12:54.369254 2560 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 12 10:12:54.691208 kubelet[2560]: I0712 10:12:54.691156 2560 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 
10:12:54.695034 kubelet[2560]: E0712 10:12:54.695011 2560 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 12 10:12:54.866145 systemd[1]: Reload requested from client PID 2834 ('systemctl') (unit session-9.scope)... Jul 12 10:12:54.866156 systemd[1]: Reloading... Jul 12 10:12:54.914295 kubelet[2560]: I0712 10:12:54.912651 2560 apiserver.go:52] "Watching apiserver" Jul 12 10:12:54.925422 zram_generator::config[2877]: No configuration found. Jul 12 10:12:54.939768 kubelet[2560]: I0712 10:12:54.939737 2560 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 10:12:55.011955 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 10:12:55.020646 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 12 10:12:55.105027 systemd[1]: Reloading finished in 238 ms. Jul 12 10:12:55.136640 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 10:12:55.148792 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 10:12:55.149107 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 10:12:55.149144 systemd[1]: kubelet.service: Consumed 769ms CPU time, 128.9M memory peak. Jul 12 10:12:55.151078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 10:12:55.539488 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 10:12:55.545790 (kubelet)[2945]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 10:12:55.615199 kubelet[2945]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 10:12:55.615199 kubelet[2945]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 10:12:55.615199 kubelet[2945]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 12 10:12:55.615199 kubelet[2945]: I0712 10:12:55.614549 2945 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 10:12:55.619723 kubelet[2945]: I0712 10:12:55.619701 2945 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 12 10:12:55.619723 kubelet[2945]: I0712 10:12:55.619721 2945 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 10:12:55.619906 kubelet[2945]: I0712 10:12:55.619891 2945 server.go:956] "Client rotation is on, will bootstrap in background" Jul 12 10:12:55.620638 kubelet[2945]: I0712 10:12:55.620625 2945 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 12 10:12:55.659796 kubelet[2945]: I0712 10:12:55.659434 2945 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 10:12:55.661941 kubelet[2945]: I0712 10:12:55.661924 2945 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 12 10:12:55.665436 kubelet[2945]: I0712 10:12:55.664015 2945 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 12 10:12:55.665436 kubelet[2945]: I0712 10:12:55.664125 2945 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 10:12:55.665436 kubelet[2945]: I0712 10:12:55.664140 2945 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 10:12:55.665436 kubelet[2945]: I0712 10:12:55.664311 2945 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 10:12:55.665764 kubelet[2945]: I0712 10:12:55.664318 2945 container_manager_linux.go:303] "Creating device plugin manager" Jul 12 10:12:55.665764 kubelet[2945]: I0712 10:12:55.664347 2945 state_mem.go:36] "Initialized new in-memory state store" Jul 12 10:12:55.665764 kubelet[2945]: I0712 
10:12:55.664516 2945 kubelet.go:480] "Attempting to sync node with API server" Jul 12 10:12:55.665764 kubelet[2945]: I0712 10:12:55.664525 2945 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 10:12:55.665764 kubelet[2945]: I0712 10:12:55.664541 2945 kubelet.go:386] "Adding apiserver pod source" Jul 12 10:12:55.665764 kubelet[2945]: I0712 10:12:55.664565 2945 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 10:12:55.666548 kubelet[2945]: I0712 10:12:55.666536 2945 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 12 10:12:55.667217 kubelet[2945]: I0712 10:12:55.667208 2945 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 12 10:12:55.670268 kubelet[2945]: I0712 10:12:55.670255 2945 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 10:12:55.670367 kubelet[2945]: I0712 10:12:55.670360 2945 server.go:1289] "Started kubelet" Jul 12 10:12:55.673149 kubelet[2945]: I0712 10:12:55.673134 2945 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 10:12:55.681654 kubelet[2945]: I0712 10:12:55.681608 2945 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 10:12:55.682712 kubelet[2945]: I0712 10:12:55.682697 2945 server.go:317] "Adding debug handlers to kubelet server" Jul 12 10:12:55.686413 kubelet[2945]: I0712 10:12:55.685181 2945 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 10:12:55.686413 kubelet[2945]: I0712 10:12:55.686005 2945 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 10:12:55.686413 kubelet[2945]: I0712 10:12:55.686136 2945 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 10:12:55.686413 kubelet[2945]: I0712 10:12:55.686271 2945 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 10:12:55.689271 kubelet[2945]: I0712 10:12:55.689253 2945 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 10:12:55.689464 kubelet[2945]: I0712 10:12:55.689455 2945 reconciler.go:26] "Reconciler: start to sync state" Jul 12 10:12:55.692025 kubelet[2945]: I0712 10:12:55.691236 2945 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 12 10:12:55.692191 kubelet[2945]: I0712 10:12:55.692179 2945 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 12 10:12:55.692260 kubelet[2945]: I0712 10:12:55.692252 2945 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 12 10:12:55.692322 kubelet[2945]: I0712 10:12:55.692313 2945 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
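The restarted kubelet (kubelet[2945]) now loads its rotated client certificate pair from /var/lib/kubelet/pki/kubelet-client-current.pem, which is why it no longer has to post a CSR to the API server at startup the way the earlier instance tried (and failed, connection refused) to do. A quick standard-library sketch for inspecting that file's validity window on the node; run as root since the key lives next to the certificate:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path taken from the kubelet message above; the file holds both the rotated client cert and its key.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		log.Fatal(err)
	}
	for {
		var block *pem.Block
		block, data = pem.Decode(data)
		if block == nil {
			break
		}
		if block.Type != "CERTIFICATE" {
			continue // skip the private key block
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n", cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}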
Jul 12 10:12:55.692491 kubelet[2945]: I0712 10:12:55.692361 2945 kubelet.go:2436] "Starting kubelet main sync loop" Jul 12 10:12:55.692663 kubelet[2945]: E0712 10:12:55.692543 2945 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 10:12:55.692867 kubelet[2945]: I0712 10:12:55.692769 2945 factory.go:223] Registration of the systemd container factory successfully Jul 12 10:12:55.692867 kubelet[2945]: I0712 10:12:55.692833 2945 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 10:12:55.696376 kubelet[2945]: E0712 10:12:55.696349 2945 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 10:12:55.700414 kubelet[2945]: I0712 10:12:55.699617 2945 factory.go:223] Registration of the containerd container factory successfully Jul 12 10:12:55.734333 kubelet[2945]: I0712 10:12:55.734309 2945 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 10:12:55.734333 kubelet[2945]: I0712 10:12:55.734320 2945 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 10:12:55.734333 kubelet[2945]: I0712 10:12:55.734331 2945 state_mem.go:36] "Initialized new in-memory state store" Jul 12 10:12:55.734467 kubelet[2945]: I0712 10:12:55.734421 2945 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 10:12:55.734467 kubelet[2945]: I0712 10:12:55.734428 2945 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 10:12:55.739401 kubelet[2945]: I0712 10:12:55.739388 2945 policy_none.go:49] "None policy: Start" Jul 12 10:12:55.739431 kubelet[2945]: I0712 10:12:55.739411 2945 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 10:12:55.739431 kubelet[2945]: I0712 10:12:55.739419 2945 state_mem.go:35] "Initializing new in-memory state store" Jul 12 10:12:55.739492 kubelet[2945]: I0712 10:12:55.739480 2945 state_mem.go:75] "Updated machine memory state" Jul 12 10:12:55.741656 kubelet[2945]: E0712 10:12:55.741641 2945 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 12 10:12:55.741735 kubelet[2945]: I0712 10:12:55.741724 2945 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 10:12:55.741759 kubelet[2945]: I0712 10:12:55.741733 2945 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 10:12:55.742057 kubelet[2945]: I0712 10:12:55.741985 2945 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 10:12:55.745338 kubelet[2945]: E0712 10:12:55.745319 2945 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 12 10:12:55.794944 kubelet[2945]: I0712 10:12:55.793737 2945 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 10:12:55.794944 kubelet[2945]: I0712 10:12:55.793737 2945 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 10:12:55.794944 kubelet[2945]: I0712 10:12:55.793891 2945 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 12 10:12:55.797897 kubelet[2945]: E0712 10:12:55.797867 2945 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 12 10:12:55.798227 kubelet[2945]: E0712 10:12:55.798197 2945 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 12 10:12:55.798654 kubelet[2945]: E0712 10:12:55.798640 2945 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 12 10:12:55.845366 kubelet[2945]: I0712 10:12:55.845313 2945 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 10:12:55.853969 kubelet[2945]: I0712 10:12:55.853940 2945 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 12 10:12:55.854069 kubelet[2945]: I0712 10:12:55.854018 2945 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 12 10:12:55.891360 kubelet[2945]: I0712 10:12:55.891212 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/104d19ea7260ba01f9babd113918f71f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"104d19ea7260ba01f9babd113918f71f\") " pod="kube-system/kube-apiserver-localhost" Jul 12 10:12:55.891360 kubelet[2945]: I0712 10:12:55.891237 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 10:12:55.891360 kubelet[2945]: I0712 10:12:55.891247 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 10:12:55.891360 kubelet[2945]: I0712 10:12:55.891258 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 10:12:55.891360 kubelet[2945]: I0712 10:12:55.891269 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod 
\"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 10:12:55.891538 kubelet[2945]: I0712 10:12:55.891277 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/104d19ea7260ba01f9babd113918f71f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"104d19ea7260ba01f9babd113918f71f\") " pod="kube-system/kube-apiserver-localhost" Jul 12 10:12:55.891538 kubelet[2945]: I0712 10:12:55.891287 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/104d19ea7260ba01f9babd113918f71f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"104d19ea7260ba01f9babd113918f71f\") " pod="kube-system/kube-apiserver-localhost" Jul 12 10:12:55.891538 kubelet[2945]: I0712 10:12:55.891296 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 10:12:55.891538 kubelet[2945]: I0712 10:12:55.891307 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 12 10:12:56.671273 kubelet[2945]: I0712 10:12:56.671096 2945 apiserver.go:52] "Watching apiserver" Jul 12 10:12:56.689501 kubelet[2945]: I0712 10:12:56.689470 2945 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 10:12:56.724411 kubelet[2945]: I0712 10:12:56.723929 2945 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 10:12:56.724712 kubelet[2945]: I0712 10:12:56.724704 2945 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 10:12:56.729801 kubelet[2945]: E0712 10:12:56.729677 2945 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 12 10:12:56.729872 kubelet[2945]: E0712 10:12:56.729823 2945 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 12 10:12:56.743097 kubelet[2945]: I0712 10:12:56.743064 2945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.743035433 podStartE2EDuration="2.743035433s" podCreationTimestamp="2025-07-12 10:12:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 10:12:56.742901218 +0000 UTC m=+1.165828356" watchObservedRunningTime="2025-07-12 10:12:56.743035433 +0000 UTC m=+1.165962567" Jul 12 10:12:56.747987 kubelet[2945]: I0712 10:12:56.747904 2945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.747894047 podStartE2EDuration="2.747894047s" podCreationTimestamp="2025-07-12 10:12:54 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 10:12:56.747816023 +0000 UTC m=+1.170743167" watchObservedRunningTime="2025-07-12 10:12:56.747894047 +0000 UTC m=+1.170821183" Jul 12 10:12:56.757651 kubelet[2945]: I0712 10:12:56.757603 2945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.757590645 podStartE2EDuration="2.757590645s" podCreationTimestamp="2025-07-12 10:12:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 10:12:56.752133927 +0000 UTC m=+1.175061065" watchObservedRunningTime="2025-07-12 10:12:56.757590645 +0000 UTC m=+1.180517786" Jul 12 10:13:00.250053 kubelet[2945]: I0712 10:13:00.249963 2945 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 10:13:00.250537 kubelet[2945]: I0712 10:13:00.250286 2945 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 10:13:00.250569 containerd[1615]: time="2025-07-12T10:13:00.250185888Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 12 10:13:01.273414 systemd[1]: Created slice kubepods-besteffort-pod184968c6_dee1_484a_8f6f_a7c84d8b34ae.slice - libcontainer container kubepods-besteffort-pod184968c6_dee1_484a_8f6f_a7c84d8b34ae.slice. Jul 12 10:13:01.325349 kubelet[2945]: I0712 10:13:01.325243 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/184968c6-dee1-484a-8f6f-a7c84d8b34ae-kube-proxy\") pod \"kube-proxy-9w9h8\" (UID: \"184968c6-dee1-484a-8f6f-a7c84d8b34ae\") " pod="kube-system/kube-proxy-9w9h8" Jul 12 10:13:01.325349 kubelet[2945]: I0712 10:13:01.325276 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/184968c6-dee1-484a-8f6f-a7c84d8b34ae-xtables-lock\") pod \"kube-proxy-9w9h8\" (UID: \"184968c6-dee1-484a-8f6f-a7c84d8b34ae\") " pod="kube-system/kube-proxy-9w9h8" Jul 12 10:13:01.325349 kubelet[2945]: I0712 10:13:01.325289 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/184968c6-dee1-484a-8f6f-a7c84d8b34ae-lib-modules\") pod \"kube-proxy-9w9h8\" (UID: \"184968c6-dee1-484a-8f6f-a7c84d8b34ae\") " pod="kube-system/kube-proxy-9w9h8" Jul 12 10:13:01.325349 kubelet[2945]: I0712 10:13:01.325303 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v7h5\" (UniqueName: \"kubernetes.io/projected/184968c6-dee1-484a-8f6f-a7c84d8b34ae-kube-api-access-4v7h5\") pod \"kube-proxy-9w9h8\" (UID: \"184968c6-dee1-484a-8f6f-a7c84d8b34ae\") " pod="kube-system/kube-proxy-9w9h8" Jul 12 10:13:01.392639 systemd[1]: Created slice kubepods-besteffort-pod001cd138_1417_47a0_b9da_091a98084c94.slice - libcontainer container kubepods-besteffort-pod001cd138_1417_47a0_b9da_091a98084c94.slice. 
Jul 12 10:13:01.426102 kubelet[2945]: I0712 10:13:01.425696 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r5tw\" (UniqueName: \"kubernetes.io/projected/001cd138-1417-47a0-b9da-091a98084c94-kube-api-access-2r5tw\") pod \"tigera-operator-747864d56d-hp67g\" (UID: \"001cd138-1417-47a0-b9da-091a98084c94\") " pod="tigera-operator/tigera-operator-747864d56d-hp67g" Jul 12 10:13:01.426102 kubelet[2945]: I0712 10:13:01.425759 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/001cd138-1417-47a0-b9da-091a98084c94-var-lib-calico\") pod \"tigera-operator-747864d56d-hp67g\" (UID: \"001cd138-1417-47a0-b9da-091a98084c94\") " pod="tigera-operator/tigera-operator-747864d56d-hp67g" Jul 12 10:13:01.582894 containerd[1615]: time="2025-07-12T10:13:01.582803331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9w9h8,Uid:184968c6-dee1-484a-8f6f-a7c84d8b34ae,Namespace:kube-system,Attempt:0,}" Jul 12 10:13:01.648078 containerd[1615]: time="2025-07-12T10:13:01.648021056Z" level=info msg="connecting to shim 3c4b5bbac3fe49f0bce5f490a0d9c4eb5c4a71fd0d0a69be471469d114c71735" address="unix:///run/containerd/s/a9cda3b217329ef43e731ada80f9cbf9c2180bcde78f07274099248f3bea9fdd" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:13:01.668555 systemd[1]: Started cri-containerd-3c4b5bbac3fe49f0bce5f490a0d9c4eb5c4a71fd0d0a69be471469d114c71735.scope - libcontainer container 3c4b5bbac3fe49f0bce5f490a0d9c4eb5c4a71fd0d0a69be471469d114c71735. Jul 12 10:13:01.689476 containerd[1615]: time="2025-07-12T10:13:01.689453548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9w9h8,Uid:184968c6-dee1-484a-8f6f-a7c84d8b34ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c4b5bbac3fe49f0bce5f490a0d9c4eb5c4a71fd0d0a69be471469d114c71735\"" Jul 12 10:13:01.693106 containerd[1615]: time="2025-07-12T10:13:01.693072749Z" level=info msg="CreateContainer within sandbox \"3c4b5bbac3fe49f0bce5f490a0d9c4eb5c4a71fd0d0a69be471469d114c71735\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 10:13:01.696348 containerd[1615]: time="2025-07-12T10:13:01.696318175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-hp67g,Uid:001cd138-1417-47a0-b9da-091a98084c94,Namespace:tigera-operator,Attempt:0,}" Jul 12 10:13:01.704975 containerd[1615]: time="2025-07-12T10:13:01.704941857Z" level=info msg="Container e2571db2e7d92502a9df9831b1e5c63bbe31e2aaf57c6df80e18195110f20c65: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:01.713412 containerd[1615]: time="2025-07-12T10:13:01.713155160Z" level=info msg="connecting to shim 6996ee67e5cbcf55edacb915f5f7483172d7d5b2a14b97960b3f6d53a3f03728" address="unix:///run/containerd/s/7590fcbcf54e3095bfd22d3d0e84f37803a12219a5d0ba41ca3c0899f3ec2adf" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:13:01.715180 containerd[1615]: time="2025-07-12T10:13:01.715163466Z" level=info msg="CreateContainer within sandbox \"3c4b5bbac3fe49f0bce5f490a0d9c4eb5c4a71fd0d0a69be471469d114c71735\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e2571db2e7d92502a9df9831b1e5c63bbe31e2aaf57c6df80e18195110f20c65\"" Jul 12 10:13:01.717639 containerd[1615]: time="2025-07-12T10:13:01.717619524Z" level=info msg="StartContainer for \"e2571db2e7d92502a9df9831b1e5c63bbe31e2aaf57c6df80e18195110f20c65\"" Jul 12 10:13:01.719520 containerd[1615]: 
time="2025-07-12T10:13:01.719502972Z" level=info msg="connecting to shim e2571db2e7d92502a9df9831b1e5c63bbe31e2aaf57c6df80e18195110f20c65" address="unix:///run/containerd/s/a9cda3b217329ef43e731ada80f9cbf9c2180bcde78f07274099248f3bea9fdd" protocol=ttrpc version=3 Jul 12 10:13:01.733659 systemd[1]: Started cri-containerd-6996ee67e5cbcf55edacb915f5f7483172d7d5b2a14b97960b3f6d53a3f03728.scope - libcontainer container 6996ee67e5cbcf55edacb915f5f7483172d7d5b2a14b97960b3f6d53a3f03728. Jul 12 10:13:01.740414 systemd[1]: Started cri-containerd-e2571db2e7d92502a9df9831b1e5c63bbe31e2aaf57c6df80e18195110f20c65.scope - libcontainer container e2571db2e7d92502a9df9831b1e5c63bbe31e2aaf57c6df80e18195110f20c65. Jul 12 10:13:01.772714 containerd[1615]: time="2025-07-12T10:13:01.772681007Z" level=info msg="StartContainer for \"e2571db2e7d92502a9df9831b1e5c63bbe31e2aaf57c6df80e18195110f20c65\" returns successfully" Jul 12 10:13:01.779120 containerd[1615]: time="2025-07-12T10:13:01.778829049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-hp67g,Uid:001cd138-1417-47a0-b9da-091a98084c94,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6996ee67e5cbcf55edacb915f5f7483172d7d5b2a14b97960b3f6d53a3f03728\"" Jul 12 10:13:01.780846 containerd[1615]: time="2025-07-12T10:13:01.780752041Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 12 10:13:02.434384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1539106747.mount: Deactivated successfully. Jul 12 10:13:02.742879 kubelet[2945]: I0712 10:13:02.742503 2945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9w9h8" podStartSLOduration=1.742489372 podStartE2EDuration="1.742489372s" podCreationTimestamp="2025-07-12 10:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 10:13:02.742127463 +0000 UTC m=+7.165054600" watchObservedRunningTime="2025-07-12 10:13:02.742489372 +0000 UTC m=+7.165416517" Jul 12 10:13:03.120480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount261256067.mount: Deactivated successfully. 
Jul 12 10:13:03.612421 containerd[1615]: time="2025-07-12T10:13:03.612371494Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:03.613507 containerd[1615]: time="2025-07-12T10:13:03.613481776Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 12 10:13:03.614813 containerd[1615]: time="2025-07-12T10:13:03.613757918Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:03.617930 containerd[1615]: time="2025-07-12T10:13:03.617913160Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:03.618209 containerd[1615]: time="2025-07-12T10:13:03.618191339Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.836918291s" Jul 12 10:13:03.618244 containerd[1615]: time="2025-07-12T10:13:03.618208480Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 12 10:13:03.621999 containerd[1615]: time="2025-07-12T10:13:03.621976099Z" level=info msg="CreateContainer within sandbox \"6996ee67e5cbcf55edacb915f5f7483172d7d5b2a14b97960b3f6d53a3f03728\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 12 10:13:03.625841 containerd[1615]: time="2025-07-12T10:13:03.625821928Z" level=info msg="Container d3e6c6c574706babd80081181e9745f412f68e775016a7a7ea4eb9a8c1d37cbb: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:03.629807 containerd[1615]: time="2025-07-12T10:13:03.629786818Z" level=info msg="CreateContainer within sandbox \"6996ee67e5cbcf55edacb915f5f7483172d7d5b2a14b97960b3f6d53a3f03728\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d3e6c6c574706babd80081181e9745f412f68e775016a7a7ea4eb9a8c1d37cbb\"" Jul 12 10:13:03.630503 containerd[1615]: time="2025-07-12T10:13:03.630481380Z" level=info msg="StartContainer for \"d3e6c6c574706babd80081181e9745f412f68e775016a7a7ea4eb9a8c1d37cbb\"" Jul 12 10:13:03.631105 containerd[1615]: time="2025-07-12T10:13:03.631088249Z" level=info msg="connecting to shim d3e6c6c574706babd80081181e9745f412f68e775016a7a7ea4eb9a8c1d37cbb" address="unix:///run/containerd/s/7590fcbcf54e3095bfd22d3d0e84f37803a12219a5d0ba41ca3c0899f3ec2adf" protocol=ttrpc version=3 Jul 12 10:13:03.651547 systemd[1]: Started cri-containerd-d3e6c6c574706babd80081181e9745f412f68e775016a7a7ea4eb9a8c1d37cbb.scope - libcontainer container d3e6c6c574706babd80081181e9745f412f68e775016a7a7ea4eb9a8c1d37cbb. 
Jul 12 10:13:03.673582 containerd[1615]: time="2025-07-12T10:13:03.673503636Z" level=info msg="StartContainer for \"d3e6c6c574706babd80081181e9745f412f68e775016a7a7ea4eb9a8c1d37cbb\" returns successfully" Jul 12 10:13:03.752477 kubelet[2945]: I0712 10:13:03.752438 2945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-hp67g" podStartSLOduration=0.91363696 podStartE2EDuration="2.752423201s" podCreationTimestamp="2025-07-12 10:13:01 +0000 UTC" firstStartedPulling="2025-07-12 10:13:01.780175798 +0000 UTC m=+6.203102929" lastFinishedPulling="2025-07-12 10:13:03.618962035 +0000 UTC m=+8.041889170" observedRunningTime="2025-07-12 10:13:03.751301404 +0000 UTC m=+8.174228547" watchObservedRunningTime="2025-07-12 10:13:03.752423201 +0000 UTC m=+8.175350354" Jul 12 10:13:09.072193 sudo[1968]: pam_unix(sudo:session): session closed for user root Jul 12 10:13:09.075160 sshd[1967]: Connection closed by 139.178.89.65 port 50006 Jul 12 10:13:09.074914 sshd-session[1964]: pam_unix(sshd:session): session closed for user core Jul 12 10:13:09.077573 systemd[1]: sshd@6-139.178.70.103:22-139.178.89.65:50006.service: Deactivated successfully. Jul 12 10:13:09.080122 systemd[1]: session-9.scope: Deactivated successfully. Jul 12 10:13:09.080380 systemd[1]: session-9.scope: Consumed 4.068s CPU time, 152.2M memory peak. Jul 12 10:13:09.083352 systemd-logind[1597]: Session 9 logged out. Waiting for processes to exit. Jul 12 10:13:09.084604 systemd-logind[1597]: Removed session 9. Jul 12 10:13:11.540419 systemd[1]: Created slice kubepods-besteffort-pod1c748e1e_8c5f_4cf1_8d2b_52d4ddc0f887.slice - libcontainer container kubepods-besteffort-pod1c748e1e_8c5f_4cf1_8d2b_52d4ddc0f887.slice. Jul 12 10:13:11.592930 kubelet[2945]: I0712 10:13:11.592907 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lftkl\" (UniqueName: \"kubernetes.io/projected/1c748e1e-8c5f-4cf1-8d2b-52d4ddc0f887-kube-api-access-lftkl\") pod \"calico-typha-76dd5bbf95-l9gst\" (UID: \"1c748e1e-8c5f-4cf1-8d2b-52d4ddc0f887\") " pod="calico-system/calico-typha-76dd5bbf95-l9gst" Jul 12 10:13:11.593694 kubelet[2945]: I0712 10:13:11.593553 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1c748e1e-8c5f-4cf1-8d2b-52d4ddc0f887-typha-certs\") pod \"calico-typha-76dd5bbf95-l9gst\" (UID: \"1c748e1e-8c5f-4cf1-8d2b-52d4ddc0f887\") " pod="calico-system/calico-typha-76dd5bbf95-l9gst" Jul 12 10:13:11.593694 kubelet[2945]: I0712 10:13:11.593571 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c748e1e-8c5f-4cf1-8d2b-52d4ddc0f887-tigera-ca-bundle\") pod \"calico-typha-76dd5bbf95-l9gst\" (UID: \"1c748e1e-8c5f-4cf1-8d2b-52d4ddc0f887\") " pod="calico-system/calico-typha-76dd5bbf95-l9gst" Jul 12 10:13:11.846438 containerd[1615]: time="2025-07-12T10:13:11.845503346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76dd5bbf95-l9gst,Uid:1c748e1e-8c5f-4cf1-8d2b-52d4ddc0f887,Namespace:calico-system,Attempt:0,}" Jul 12 10:13:11.853893 systemd[1]: Created slice kubepods-besteffort-pod7c3baf4a_2c6f_470c_99e7_86d3f3c24a26.slice - libcontainer container kubepods-besteffort-pod7c3baf4a_2c6f_470c_99e7_86d3f3c24a26.slice. 
Jul 12 10:13:11.892202 containerd[1615]: time="2025-07-12T10:13:11.891807971Z" level=info msg="connecting to shim b959d344cc35e8671c883279b3e58cf504eaeeed273e24c2779c1f61d7e155d1" address="unix:///run/containerd/s/03100b6279a31c6de6c34fc3f6114f8f8bd1c911510a86df4ccb81791ec9a7b6" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:13:11.897410 kubelet[2945]: I0712 10:13:11.895661 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7c3baf4a-2c6f-470c-99e7-86d3f3c24a26-var-lib-calico\") pod \"calico-node-ppd8c\" (UID: \"7c3baf4a-2c6f-470c-99e7-86d3f3c24a26\") " pod="calico-system/calico-node-ppd8c" Jul 12 10:13:11.897410 kubelet[2945]: I0712 10:13:11.895694 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7c3baf4a-2c6f-470c-99e7-86d3f3c24a26-node-certs\") pod \"calico-node-ppd8c\" (UID: \"7c3baf4a-2c6f-470c-99e7-86d3f3c24a26\") " pod="calico-system/calico-node-ppd8c" Jul 12 10:13:11.897410 kubelet[2945]: I0712 10:13:11.895713 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7c3baf4a-2c6f-470c-99e7-86d3f3c24a26-flexvol-driver-host\") pod \"calico-node-ppd8c\" (UID: \"7c3baf4a-2c6f-470c-99e7-86d3f3c24a26\") " pod="calico-system/calico-node-ppd8c" Jul 12 10:13:11.897410 kubelet[2945]: I0712 10:13:11.895730 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c3baf4a-2c6f-470c-99e7-86d3f3c24a26-lib-modules\") pod \"calico-node-ppd8c\" (UID: \"7c3baf4a-2c6f-470c-99e7-86d3f3c24a26\") " pod="calico-system/calico-node-ppd8c" Jul 12 10:13:11.897410 kubelet[2945]: I0712 10:13:11.895746 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c3baf4a-2c6f-470c-99e7-86d3f3c24a26-tigera-ca-bundle\") pod \"calico-node-ppd8c\" (UID: \"7c3baf4a-2c6f-470c-99e7-86d3f3c24a26\") " pod="calico-system/calico-node-ppd8c" Jul 12 10:13:11.897580 kubelet[2945]: I0712 10:13:11.895760 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7c3baf4a-2c6f-470c-99e7-86d3f3c24a26-policysync\") pod \"calico-node-ppd8c\" (UID: \"7c3baf4a-2c6f-470c-99e7-86d3f3c24a26\") " pod="calico-system/calico-node-ppd8c" Jul 12 10:13:11.897580 kubelet[2945]: I0712 10:13:11.895773 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7c3baf4a-2c6f-470c-99e7-86d3f3c24a26-cni-bin-dir\") pod \"calico-node-ppd8c\" (UID: \"7c3baf4a-2c6f-470c-99e7-86d3f3c24a26\") " pod="calico-system/calico-node-ppd8c" Jul 12 10:13:11.897580 kubelet[2945]: I0712 10:13:11.895784 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7c3baf4a-2c6f-470c-99e7-86d3f3c24a26-cni-net-dir\") pod \"calico-node-ppd8c\" (UID: \"7c3baf4a-2c6f-470c-99e7-86d3f3c24a26\") " pod="calico-system/calico-node-ppd8c" Jul 12 10:13:11.897580 kubelet[2945]: I0712 10:13:11.895793 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5x6rg\" (UniqueName: \"kubernetes.io/projected/7c3baf4a-2c6f-470c-99e7-86d3f3c24a26-kube-api-access-5x6rg\") pod \"calico-node-ppd8c\" (UID: \"7c3baf4a-2c6f-470c-99e7-86d3f3c24a26\") " pod="calico-system/calico-node-ppd8c" Jul 12 10:13:11.897580 kubelet[2945]: I0712 10:13:11.895802 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7c3baf4a-2c6f-470c-99e7-86d3f3c24a26-var-run-calico\") pod \"calico-node-ppd8c\" (UID: \"7c3baf4a-2c6f-470c-99e7-86d3f3c24a26\") " pod="calico-system/calico-node-ppd8c" Jul 12 10:13:11.897689 kubelet[2945]: I0712 10:13:11.895811 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7c3baf4a-2c6f-470c-99e7-86d3f3c24a26-cni-log-dir\") pod \"calico-node-ppd8c\" (UID: \"7c3baf4a-2c6f-470c-99e7-86d3f3c24a26\") " pod="calico-system/calico-node-ppd8c" Jul 12 10:13:11.897689 kubelet[2945]: I0712 10:13:11.895821 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c3baf4a-2c6f-470c-99e7-86d3f3c24a26-xtables-lock\") pod \"calico-node-ppd8c\" (UID: \"7c3baf4a-2c6f-470c-99e7-86d3f3c24a26\") " pod="calico-system/calico-node-ppd8c" Jul 12 10:13:11.915524 systemd[1]: Started cri-containerd-b959d344cc35e8671c883279b3e58cf504eaeeed273e24c2779c1f61d7e155d1.scope - libcontainer container b959d344cc35e8671c883279b3e58cf504eaeeed273e24c2779c1f61d7e155d1. Jul 12 10:13:11.953044 containerd[1615]: time="2025-07-12T10:13:11.953020356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76dd5bbf95-l9gst,Uid:1c748e1e-8c5f-4cf1-8d2b-52d4ddc0f887,Namespace:calico-system,Attempt:0,} returns sandbox id \"b959d344cc35e8671c883279b3e58cf504eaeeed273e24c2779c1f61d7e155d1\"" Jul 12 10:13:11.953855 containerd[1615]: time="2025-07-12T10:13:11.953817820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 12 10:13:12.001593 kubelet[2945]: E0712 10:13:12.001322 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.001593 kubelet[2945]: W0712 10:13:12.001342 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.002046 kubelet[2945]: E0712 10:13:12.001678 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.005887 kubelet[2945]: E0712 10:13:12.005873 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.006006 kubelet[2945]: W0712 10:13:12.005930 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.006006 kubelet[2945]: E0712 10:13:12.005944 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:12.121162 kubelet[2945]: E0712 10:13:12.120950 2945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5skpj" podUID="c9d481cf-b5a0-47fe-a981-c3ce861cf9d4" Jul 12 10:13:12.158467 containerd[1615]: time="2025-07-12T10:13:12.158391029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ppd8c,Uid:7c3baf4a-2c6f-470c-99e7-86d3f3c24a26,Namespace:calico-system,Attempt:0,}" Jul 12 10:13:12.167688 containerd[1615]: time="2025-07-12T10:13:12.167642617Z" level=info msg="connecting to shim d2be803a67f1b7a36f9727b7bd4a8a26f8f211a20a72abd823baa05013c57a69" address="unix:///run/containerd/s/e1623bc9a6b1c6d8df7e8fc2d5bb391094ae661087b9638423add627f8e5992f" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:13:12.189164 kubelet[2945]: E0712 10:13:12.188481 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.189164 kubelet[2945]: W0712 10:13:12.188500 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.189164 kubelet[2945]: E0712 10:13:12.188518 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.189164 kubelet[2945]: E0712 10:13:12.188682 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.189164 kubelet[2945]: W0712 10:13:12.188695 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.189164 kubelet[2945]: E0712 10:13:12.188706 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.189164 kubelet[2945]: E0712 10:13:12.188870 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.189164 kubelet[2945]: W0712 10:13:12.188878 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.189164 kubelet[2945]: E0712 10:13:12.188886 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:12.199353 kubelet[2945]: E0712 10:13:12.199330 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.199494 kubelet[2945]: W0712 10:13:12.199477 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.199557 kubelet[2945]: E0712 10:13:12.199548 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.199776 kubelet[2945]: E0712 10:13:12.199768 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.203309 kubelet[2945]: W0712 10:13:12.199871 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.203309 kubelet[2945]: E0712 10:13:12.199879 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.203309 kubelet[2945]: E0712 10:13:12.199978 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.203309 kubelet[2945]: W0712 10:13:12.199984 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.203309 kubelet[2945]: E0712 10:13:12.199992 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.203309 kubelet[2945]: E0712 10:13:12.200087 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.203309 kubelet[2945]: W0712 10:13:12.200092 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.203309 kubelet[2945]: E0712 10:13:12.200097 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.203309 kubelet[2945]: E0712 10:13:12.200188 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.203309 kubelet[2945]: W0712 10:13:12.200194 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.203530 kubelet[2945]: E0712 10:13:12.200199 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:12.203530 kubelet[2945]: E0712 10:13:12.200300 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.203530 kubelet[2945]: W0712 10:13:12.200305 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.203530 kubelet[2945]: E0712 10:13:12.200313 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.203530 kubelet[2945]: E0712 10:13:12.200418 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.203530 kubelet[2945]: W0712 10:13:12.200423 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.203530 kubelet[2945]: E0712 10:13:12.200430 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.203530 kubelet[2945]: E0712 10:13:12.200525 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.203530 kubelet[2945]: W0712 10:13:12.200530 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.203530 kubelet[2945]: E0712 10:13:12.200536 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.204121 kubelet[2945]: E0712 10:13:12.203716 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.204121 kubelet[2945]: W0712 10:13:12.203727 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.204121 kubelet[2945]: E0712 10:13:12.203741 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.204121 kubelet[2945]: E0712 10:13:12.203891 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.204121 kubelet[2945]: W0712 10:13:12.203896 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.204121 kubelet[2945]: E0712 10:13:12.203901 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:12.204288 kubelet[2945]: E0712 10:13:12.204281 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.204322 kubelet[2945]: W0712 10:13:12.204317 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.204367 kubelet[2945]: E0712 10:13:12.204361 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.204528 kubelet[2945]: E0712 10:13:12.204522 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.204567 kubelet[2945]: W0712 10:13:12.204561 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.204636 kubelet[2945]: E0712 10:13:12.204630 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.204883 kubelet[2945]: E0712 10:13:12.204875 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.204936 kubelet[2945]: W0712 10:13:12.204928 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.204987 kubelet[2945]: E0712 10:13:12.204978 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.205135 kubelet[2945]: E0712 10:13:12.205129 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.205183 kubelet[2945]: W0712 10:13:12.205176 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.205241 kubelet[2945]: E0712 10:13:12.205231 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.205375 kubelet[2945]: E0712 10:13:12.205369 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.207431 kubelet[2945]: W0712 10:13:12.207415 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.207496 kubelet[2945]: E0712 10:13:12.207488 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:12.207636 kubelet[2945]: E0712 10:13:12.207629 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.207691 kubelet[2945]: W0712 10:13:12.207682 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.207741 kubelet[2945]: E0712 10:13:12.207734 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.207880 kubelet[2945]: E0712 10:13:12.207872 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.207922 kubelet[2945]: W0712 10:13:12.207916 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.207973 kubelet[2945]: E0712 10:13:12.207967 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.208306 kubelet[2945]: E0712 10:13:12.208297 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.208358 kubelet[2945]: W0712 10:13:12.208350 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.208488 kubelet[2945]: E0712 10:13:12.208478 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.208556 kubelet[2945]: I0712 10:13:12.208544 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c9d481cf-b5a0-47fe-a981-c3ce861cf9d4-varrun\") pod \"csi-node-driver-5skpj\" (UID: \"c9d481cf-b5a0-47fe-a981-c3ce861cf9d4\") " pod="calico-system/csi-node-driver-5skpj" Jul 12 10:13:12.208844 kubelet[2945]: E0712 10:13:12.208835 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.208898 kubelet[2945]: W0712 10:13:12.208890 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.208939 kubelet[2945]: E0712 10:13:12.208933 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:12.209083 kubelet[2945]: E0712 10:13:12.209076 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.209140 kubelet[2945]: W0712 10:13:12.209131 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.209219 kubelet[2945]: E0712 10:13:12.209209 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.209579 kubelet[2945]: E0712 10:13:12.209431 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.209579 kubelet[2945]: W0712 10:13:12.209440 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.209579 kubelet[2945]: E0712 10:13:12.209446 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.209579 kubelet[2945]: I0712 10:13:12.209513 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9d481cf-b5a0-47fe-a981-c3ce861cf9d4-kubelet-dir\") pod \"csi-node-driver-5skpj\" (UID: \"c9d481cf-b5a0-47fe-a981-c3ce861cf9d4\") " pod="calico-system/csi-node-driver-5skpj" Jul 12 10:13:12.210865 kubelet[2945]: E0712 10:13:12.210756 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.210865 kubelet[2945]: W0712 10:13:12.210769 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.210865 kubelet[2945]: E0712 10:13:12.210778 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.210865 kubelet[2945]: I0712 10:13:12.210794 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c9d481cf-b5a0-47fe-a981-c3ce861cf9d4-registration-dir\") pod \"csi-node-driver-5skpj\" (UID: \"c9d481cf-b5a0-47fe-a981-c3ce861cf9d4\") " pod="calico-system/csi-node-driver-5skpj" Jul 12 10:13:12.211856 kubelet[2945]: E0712 10:13:12.211754 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.211856 kubelet[2945]: W0712 10:13:12.211763 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.211856 kubelet[2945]: E0712 10:13:12.211771 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:12.211856 kubelet[2945]: I0712 10:13:12.211782 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwlv6\" (UniqueName: \"kubernetes.io/projected/c9d481cf-b5a0-47fe-a981-c3ce861cf9d4-kube-api-access-dwlv6\") pod \"csi-node-driver-5skpj\" (UID: \"c9d481cf-b5a0-47fe-a981-c3ce861cf9d4\") " pod="calico-system/csi-node-driver-5skpj" Jul 12 10:13:12.212112 kubelet[2945]: E0712 10:13:12.211989 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.212112 kubelet[2945]: W0712 10:13:12.211999 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.212112 kubelet[2945]: E0712 10:13:12.212006 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.212112 kubelet[2945]: I0712 10:13:12.212058 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c9d481cf-b5a0-47fe-a981-c3ce861cf9d4-socket-dir\") pod \"csi-node-driver-5skpj\" (UID: \"c9d481cf-b5a0-47fe-a981-c3ce861cf9d4\") " pod="calico-system/csi-node-driver-5skpj" Jul 12 10:13:12.212344 kubelet[2945]: E0712 10:13:12.212242 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.212344 kubelet[2945]: W0712 10:13:12.212249 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.212344 kubelet[2945]: E0712 10:13:12.212255 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.212571 kubelet[2945]: E0712 10:13:12.212464 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.212571 kubelet[2945]: W0712 10:13:12.212471 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.212571 kubelet[2945]: E0712 10:13:12.212477 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.213067 kubelet[2945]: E0712 10:13:12.212939 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.213067 kubelet[2945]: W0712 10:13:12.212944 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.213067 kubelet[2945]: E0712 10:13:12.212950 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:12.213218 kubelet[2945]: E0712 10:13:12.213155 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.213218 kubelet[2945]: W0712 10:13:12.213161 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.213218 kubelet[2945]: E0712 10:13:12.213167 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.213409 kubelet[2945]: E0712 10:13:12.213325 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.213409 kubelet[2945]: W0712 10:13:12.213331 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.213409 kubelet[2945]: E0712 10:13:12.213336 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.214876 kubelet[2945]: E0712 10:13:12.214788 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.214876 kubelet[2945]: W0712 10:13:12.214796 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.214876 kubelet[2945]: E0712 10:13:12.214803 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.215162 kubelet[2945]: E0712 10:13:12.215078 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.215162 kubelet[2945]: W0712 10:13:12.215087 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.215162 kubelet[2945]: E0712 10:13:12.215094 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.215362 kubelet[2945]: E0712 10:13:12.215339 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.215362 kubelet[2945]: W0712 10:13:12.215345 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.215362 kubelet[2945]: E0712 10:13:12.215352 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:12.237744 systemd[1]: Started cri-containerd-d2be803a67f1b7a36f9727b7bd4a8a26f8f211a20a72abd823baa05013c57a69.scope - libcontainer container d2be803a67f1b7a36f9727b7bd4a8a26f8f211a20a72abd823baa05013c57a69. Jul 12 10:13:12.297172 containerd[1615]: time="2025-07-12T10:13:12.297141517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ppd8c,Uid:7c3baf4a-2c6f-470c-99e7-86d3f3c24a26,Namespace:calico-system,Attempt:0,} returns sandbox id \"d2be803a67f1b7a36f9727b7bd4a8a26f8f211a20a72abd823baa05013c57a69\"" Jul 12 10:13:12.312638 kubelet[2945]: E0712 10:13:12.312616 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.312638 kubelet[2945]: W0712 10:13:12.312632 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.313449 kubelet[2945]: E0712 10:13:12.313426 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.313606 kubelet[2945]: E0712 10:13:12.313594 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.313606 kubelet[2945]: W0712 10:13:12.313604 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.313679 kubelet[2945]: E0712 10:13:12.313615 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.313759 kubelet[2945]: E0712 10:13:12.313749 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.313759 kubelet[2945]: W0712 10:13:12.313755 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.313897 kubelet[2945]: E0712 10:13:12.313762 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.313966 kubelet[2945]: E0712 10:13:12.313958 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.314058 kubelet[2945]: W0712 10:13:12.314007 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.314058 kubelet[2945]: E0712 10:13:12.314022 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:12.314256 kubelet[2945]: E0712 10:13:12.314210 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.314256 kubelet[2945]: W0712 10:13:12.314220 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.314256 kubelet[2945]: E0712 10:13:12.314228 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.314503 kubelet[2945]: E0712 10:13:12.314438 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.314503 kubelet[2945]: W0712 10:13:12.314447 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.314503 kubelet[2945]: E0712 10:13:12.314455 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.314662 kubelet[2945]: E0712 10:13:12.314655 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.314822 kubelet[2945]: W0712 10:13:12.314704 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.314822 kubelet[2945]: E0712 10:13:12.314715 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.314903 kubelet[2945]: E0712 10:13:12.314893 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.314929 kubelet[2945]: W0712 10:13:12.314903 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.314929 kubelet[2945]: E0712 10:13:12.314911 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.315051 kubelet[2945]: E0712 10:13:12.315026 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.315051 kubelet[2945]: W0712 10:13:12.315035 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.315051 kubelet[2945]: E0712 10:13:12.315042 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:12.315213 kubelet[2945]: E0712 10:13:12.315201 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.315213 kubelet[2945]: W0712 10:13:12.315211 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.315288 kubelet[2945]: E0712 10:13:12.315220 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.316669 kubelet[2945]: E0712 10:13:12.315565 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.316669 kubelet[2945]: W0712 10:13:12.316625 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.316669 kubelet[2945]: E0712 10:13:12.316637 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.316804 kubelet[2945]: E0712 10:13:12.316790 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.316804 kubelet[2945]: W0712 10:13:12.316800 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.316862 kubelet[2945]: E0712 10:13:12.316809 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.316980 kubelet[2945]: E0712 10:13:12.316947 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.316980 kubelet[2945]: W0712 10:13:12.316954 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.316980 kubelet[2945]: E0712 10:13:12.316960 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.317575 kubelet[2945]: E0712 10:13:12.317564 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.317575 kubelet[2945]: W0712 10:13:12.317571 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.317741 kubelet[2945]: E0712 10:13:12.317579 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:12.317851 kubelet[2945]: E0712 10:13:12.317839 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.317851 kubelet[2945]: W0712 10:13:12.317846 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.317905 kubelet[2945]: E0712 10:13:12.317855 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.318481 kubelet[2945]: E0712 10:13:12.318469 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.318481 kubelet[2945]: W0712 10:13:12.318477 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.318628 kubelet[2945]: E0712 10:13:12.318483 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.318929 kubelet[2945]: E0712 10:13:12.318913 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.318929 kubelet[2945]: W0712 10:13:12.318921 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.318929 kubelet[2945]: E0712 10:13:12.318930 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.319441 kubelet[2945]: E0712 10:13:12.319431 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.319441 kubelet[2945]: W0712 10:13:12.319440 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.319537 kubelet[2945]: E0712 10:13:12.319446 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.320439 kubelet[2945]: E0712 10:13:12.320424 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.320439 kubelet[2945]: W0712 10:13:12.320435 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.320586 kubelet[2945]: E0712 10:13:12.320443 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:12.320586 kubelet[2945]: E0712 10:13:12.320565 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.320586 kubelet[2945]: W0712 10:13:12.320570 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.320586 kubelet[2945]: E0712 10:13:12.320576 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.320823 kubelet[2945]: E0712 10:13:12.320660 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.320823 kubelet[2945]: W0712 10:13:12.320664 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.320823 kubelet[2945]: E0712 10:13:12.320669 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.321455 kubelet[2945]: E0712 10:13:12.321438 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.321455 kubelet[2945]: W0712 10:13:12.321453 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.321557 kubelet[2945]: E0712 10:13:12.321462 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.321591 kubelet[2945]: E0712 10:13:12.321571 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.321591 kubelet[2945]: W0712 10:13:12.321588 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.321669 kubelet[2945]: E0712 10:13:12.321597 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.321894 kubelet[2945]: E0712 10:13:12.321858 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.322238 kubelet[2945]: W0712 10:13:12.322229 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.322275 kubelet[2945]: E0712 10:13:12.322270 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:12.322494 kubelet[2945]: E0712 10:13:12.322487 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.322562 kubelet[2945]: W0712 10:13:12.322555 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.322606 kubelet[2945]: E0712 10:13:12.322600 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:12.326296 kubelet[2945]: E0712 10:13:12.326281 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:12.326467 kubelet[2945]: W0712 10:13:12.326445 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:12.326544 kubelet[2945]: E0712 10:13:12.326517 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:13.700820 kubelet[2945]: E0712 10:13:13.700780 2945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5skpj" podUID="c9d481cf-b5a0-47fe-a981-c3ce861cf9d4" Jul 12 10:13:13.901672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3501514776.mount: Deactivated successfully. 
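Note on the repeated driver-call.go / plugins.go records above: the kubelet is probing its FlexVolume plugin directory and executing the driver binary /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument "init", expecting a JSON status object on stdout. Because that binary is not installed yet, the call produces empty output and the JSON decode fails with "unexpected end of JSON input". A minimal sketch of the stdout such a driver is expected to produce, assuming the standard FlexVolume call convention (the capability values shown are illustrative, not taken from this host):

    #!/usr/bin/env python3
    # Minimal FlexVolume-style driver sketch: handle only the "init" call and
    # print the JSON status object that the kubelet's driver-call path tries
    # to unmarshal in the records above. Illustrative only.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # attach=false: this driver has no separate attach/detach phase.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        # Calls this sketch does not implement are reported as not supported.
        print(json.dumps({"status": "Not supported", "message": "unhandled call: " + op}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())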
Jul 12 10:13:14.785955 containerd[1615]: time="2025-07-12T10:13:14.785556750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:14.787812 containerd[1615]: time="2025-07-12T10:13:14.786840507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 12 10:13:14.787812 containerd[1615]: time="2025-07-12T10:13:14.787253975Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:14.788708 containerd[1615]: time="2025-07-12T10:13:14.788695918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:14.789282 containerd[1615]: time="2025-07-12T10:13:14.789078409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.83524509s" Jul 12 10:13:14.789334 containerd[1615]: time="2025-07-12T10:13:14.789326381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 12 10:13:14.790192 containerd[1615]: time="2025-07-12T10:13:14.790180257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 12 10:13:14.800697 containerd[1615]: time="2025-07-12T10:13:14.800663333Z" level=info msg="CreateContainer within sandbox \"b959d344cc35e8671c883279b3e58cf504eaeeed273e24c2779c1f61d7e155d1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 12 10:13:14.814613 containerd[1615]: time="2025-07-12T10:13:14.814589466Z" level=info msg="Container b96363793c5a97c147c37bf7b711c22fe05d6971df58a68f498a6542a1fe98ce: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:14.818142 containerd[1615]: time="2025-07-12T10:13:14.818124623Z" level=info msg="CreateContainer within sandbox \"b959d344cc35e8671c883279b3e58cf504eaeeed273e24c2779c1f61d7e155d1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b96363793c5a97c147c37bf7b711c22fe05d6971df58a68f498a6542a1fe98ce\"" Jul 12 10:13:14.819100 containerd[1615]: time="2025-07-12T10:13:14.819077322Z" level=info msg="StartContainer for \"b96363793c5a97c147c37bf7b711c22fe05d6971df58a68f498a6542a1fe98ce\"" Jul 12 10:13:14.820672 containerd[1615]: time="2025-07-12T10:13:14.820646375Z" level=info msg="connecting to shim b96363793c5a97c147c37bf7b711c22fe05d6971df58a68f498a6542a1fe98ce" address="unix:///run/containerd/s/03100b6279a31c6de6c34fc3f6114f8f8bd1c911510a86df4ccb81791ec9a7b6" protocol=ttrpc version=3 Jul 12 10:13:14.847596 systemd[1]: Started cri-containerd-b96363793c5a97c147c37bf7b711c22fe05d6971df58a68f498a6542a1fe98ce.scope - libcontainer container b96363793c5a97c147c37bf7b711c22fe05d6971df58a68f498a6542a1fe98ce. 
Jul 12 10:13:14.902414 containerd[1615]: time="2025-07-12T10:13:14.901983641Z" level=info msg="StartContainer for \"b96363793c5a97c147c37bf7b711c22fe05d6971df58a68f498a6542a1fe98ce\" returns successfully" Jul 12 10:13:15.692796 kubelet[2945]: E0712 10:13:15.692738 2945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5skpj" podUID="c9d481cf-b5a0-47fe-a981-c3ce861cf9d4" Jul 12 10:13:15.781079 kubelet[2945]: I0712 10:13:15.780809 2945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76dd5bbf95-l9gst" podStartSLOduration=1.944578084 podStartE2EDuration="4.780797921s" podCreationTimestamp="2025-07-12 10:13:11 +0000 UTC" firstStartedPulling="2025-07-12 10:13:11.953681823 +0000 UTC m=+16.376608957" lastFinishedPulling="2025-07-12 10:13:14.789901653 +0000 UTC m=+19.212828794" observedRunningTime="2025-07-12 10:13:15.7806388 +0000 UTC m=+20.203565937" watchObservedRunningTime="2025-07-12 10:13:15.780797921 +0000 UTC m=+20.203725059" Jul 12 10:13:15.829168 kubelet[2945]: E0712 10:13:15.828933 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.829168 kubelet[2945]: W0712 10:13:15.828951 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.832569 kubelet[2945]: E0712 10:13:15.832515 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.832904 kubelet[2945]: E0712 10:13:15.832893 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.832904 kubelet[2945]: W0712 10:13:15.832902 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.832958 kubelet[2945]: E0712 10:13:15.832911 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.833070 kubelet[2945]: E0712 10:13:15.833058 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.833070 kubelet[2945]: W0712 10:13:15.833064 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.833070 kubelet[2945]: E0712 10:13:15.833069 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:15.833326 kubelet[2945]: E0712 10:13:15.833316 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.833326 kubelet[2945]: W0712 10:13:15.833323 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.836480 kubelet[2945]: E0712 10:13:15.833329 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.836480 kubelet[2945]: E0712 10:13:15.833519 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.836480 kubelet[2945]: W0712 10:13:15.833524 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.836480 kubelet[2945]: E0712 10:13:15.833529 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.836480 kubelet[2945]: E0712 10:13:15.833695 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.836480 kubelet[2945]: W0712 10:13:15.833699 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.836480 kubelet[2945]: E0712 10:13:15.833704 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.836480 kubelet[2945]: E0712 10:13:15.833868 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.836480 kubelet[2945]: W0712 10:13:15.833874 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.836480 kubelet[2945]: E0712 10:13:15.833879 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.836640 kubelet[2945]: E0712 10:13:15.834215 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.836640 kubelet[2945]: W0712 10:13:15.834220 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.836640 kubelet[2945]: E0712 10:13:15.834225 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:15.836640 kubelet[2945]: E0712 10:13:15.835285 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.836640 kubelet[2945]: W0712 10:13:15.835290 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.836640 kubelet[2945]: E0712 10:13:15.835296 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.836640 kubelet[2945]: E0712 10:13:15.835696 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.836640 kubelet[2945]: W0712 10:13:15.835740 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.836640 kubelet[2945]: E0712 10:13:15.835871 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.836640 kubelet[2945]: E0712 10:13:15.836120 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.836798 kubelet[2945]: W0712 10:13:15.836124 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.836798 kubelet[2945]: E0712 10:13:15.836129 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.836798 kubelet[2945]: E0712 10:13:15.836237 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.836798 kubelet[2945]: W0712 10:13:15.836241 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.836798 kubelet[2945]: E0712 10:13:15.836259 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.836798 kubelet[2945]: E0712 10:13:15.836760 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.836798 kubelet[2945]: W0712 10:13:15.836764 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.836798 kubelet[2945]: E0712 10:13:15.836769 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:15.836973 kubelet[2945]: E0712 10:13:15.836848 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.836973 kubelet[2945]: W0712 10:13:15.836852 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.836973 kubelet[2945]: E0712 10:13:15.836856 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.836973 kubelet[2945]: E0712 10:13:15.836931 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.836973 kubelet[2945]: W0712 10:13:15.836935 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.836973 kubelet[2945]: E0712 10:13:15.836939 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.843217 kubelet[2945]: E0712 10:13:15.843190 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.843217 kubelet[2945]: W0712 10:13:15.843198 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.843217 kubelet[2945]: E0712 10:13:15.843206 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.843410 kubelet[2945]: E0712 10:13:15.843368 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.843410 kubelet[2945]: W0712 10:13:15.843374 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.843410 kubelet[2945]: E0712 10:13:15.843379 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.843587 kubelet[2945]: E0712 10:13:15.843551 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.843587 kubelet[2945]: W0712 10:13:15.843557 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.843587 kubelet[2945]: E0712 10:13:15.843562 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:15.844437 kubelet[2945]: E0712 10:13:15.844424 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.844437 kubelet[2945]: W0712 10:13:15.844434 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.844503 kubelet[2945]: E0712 10:13:15.844440 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.844577 kubelet[2945]: E0712 10:13:15.844520 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.844577 kubelet[2945]: W0712 10:13:15.844524 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.844577 kubelet[2945]: E0712 10:13:15.844529 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.844681 kubelet[2945]: E0712 10:13:15.844675 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.844715 kubelet[2945]: W0712 10:13:15.844709 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.844754 kubelet[2945]: E0712 10:13:15.844742 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.844897 kubelet[2945]: E0712 10:13:15.844861 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.844897 kubelet[2945]: W0712 10:13:15.844867 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.844897 kubelet[2945]: E0712 10:13:15.844872 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.845044 kubelet[2945]: E0712 10:13:15.845014 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.845044 kubelet[2945]: W0712 10:13:15.845019 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.845044 kubelet[2945]: E0712 10:13:15.845024 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:15.845207 kubelet[2945]: E0712 10:13:15.845158 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.845207 kubelet[2945]: W0712 10:13:15.845164 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.845207 kubelet[2945]: E0712 10:13:15.845168 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.845410 kubelet[2945]: E0712 10:13:15.845318 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.845410 kubelet[2945]: W0712 10:13:15.845324 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.845410 kubelet[2945]: E0712 10:13:15.845328 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.845471 kubelet[2945]: E0712 10:13:15.845459 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.845471 kubelet[2945]: W0712 10:13:15.845464 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.845471 kubelet[2945]: E0712 10:13:15.845470 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.845595 kubelet[2945]: E0712 10:13:15.845585 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.845595 kubelet[2945]: W0712 10:13:15.845591 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.845635 kubelet[2945]: E0712 10:13:15.845596 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.845789 kubelet[2945]: E0712 10:13:15.845777 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.845789 kubelet[2945]: W0712 10:13:15.845784 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.845878 kubelet[2945]: E0712 10:13:15.845791 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:15.845968 kubelet[2945]: E0712 10:13:15.845960 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.845968 kubelet[2945]: W0712 10:13:15.845966 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.846027 kubelet[2945]: E0712 10:13:15.845971 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.846250 kubelet[2945]: E0712 10:13:15.846240 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.846250 kubelet[2945]: W0712 10:13:15.846246 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.846298 kubelet[2945]: E0712 10:13:15.846251 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.846330 kubelet[2945]: E0712 10:13:15.846321 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.846330 kubelet[2945]: W0712 10:13:15.846327 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.846425 kubelet[2945]: E0712 10:13:15.846332 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.846425 kubelet[2945]: E0712 10:13:15.846424 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.846471 kubelet[2945]: W0712 10:13:15.846429 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.846471 kubelet[2945]: E0712 10:13:15.846433 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:15.846604 kubelet[2945]: E0712 10:13:15.846594 2945 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:15.846604 kubelet[2945]: W0712 10:13:15.846603 2945 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:15.846646 kubelet[2945]: E0712 10:13:15.846608 2945 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:15.944002 containerd[1615]: time="2025-07-12T10:13:15.943921513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:15.945745 containerd[1615]: time="2025-07-12T10:13:15.945702482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 12 10:13:15.946340 containerd[1615]: time="2025-07-12T10:13:15.946232052Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:15.947303 containerd[1615]: time="2025-07-12T10:13:15.947283671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:15.948004 containerd[1615]: time="2025-07-12T10:13:15.947807650Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.157576964s" Jul 12 10:13:15.948004 containerd[1615]: time="2025-07-12T10:13:15.947825489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 12 10:13:15.951135 containerd[1615]: time="2025-07-12T10:13:15.950966925Z" level=info msg="CreateContainer within sandbox \"d2be803a67f1b7a36f9727b7bd4a8a26f8f211a20a72abd823baa05013c57a69\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 12 10:13:15.973769 containerd[1615]: time="2025-07-12T10:13:15.973744394Z" level=info msg="Container 4bf64ae3a6686cbafd48317e29e33ca8c265b902e6f850366e57993a86e802e9: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:15.980878 containerd[1615]: time="2025-07-12T10:13:15.980797346Z" level=info msg="CreateContainer within sandbox \"d2be803a67f1b7a36f9727b7bd4a8a26f8f211a20a72abd823baa05013c57a69\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4bf64ae3a6686cbafd48317e29e33ca8c265b902e6f850366e57993a86e802e9\"" Jul 12 10:13:15.981957 containerd[1615]: time="2025-07-12T10:13:15.981843453Z" level=info msg="StartContainer for \"4bf64ae3a6686cbafd48317e29e33ca8c265b902e6f850366e57993a86e802e9\"" Jul 12 10:13:15.983026 containerd[1615]: time="2025-07-12T10:13:15.982993063Z" level=info msg="connecting to shim 4bf64ae3a6686cbafd48317e29e33ca8c265b902e6f850366e57993a86e802e9" address="unix:///run/containerd/s/e1623bc9a6b1c6d8df7e8fc2d5bb391094ae661087b9638423add627f8e5992f" protocol=ttrpc version=3 Jul 12 10:13:16.002520 systemd[1]: Started cri-containerd-4bf64ae3a6686cbafd48317e29e33ca8c265b902e6f850366e57993a86e802e9.scope - libcontainer container 4bf64ae3a6686cbafd48317e29e33ca8c265b902e6f850366e57993a86e802e9. 
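The pod_startup_latency_tracker record for calico-typha-76dd5bbf95-l9gst (logged above at 10:13:15.780) is self-consistent: the end-to-end duration is the observed running time minus the pod creation timestamp, and the SLO duration appears to be that figure minus the image-pull window measured on the monotonic clock (the m=+ offsets). A quick check of that interpretation, using only the numbers from the record; this is a reconstruction of the arithmetic, not the kubelet's code path:

    # Reconstructing the calico-typha startup figures from the log record above.
    # Assumed interpretation: SLO duration = E2E duration minus the image-pull window.
    e2e = 15.780797921 - 11.000000000    # observedRunningTime - podCreationTimestamp (10:13:15.780797921 - 10:13:11)
    pull = 19.212828794 - 16.376608957   # lastFinishedPulling - firstStartedPulling, monotonic m=+ offsets
    slo = e2e - pull
    print(f"E2E={e2e:.9f}s  pull={pull:.9f}s  SLO={slo:.9f}s")
    # Prints E2E=4.780797921s  pull=2.836219837s  SLO=1.944578084s, matching
    # podStartE2EDuration and podStartSLOduration in the record.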
Jul 12 10:13:16.031065 containerd[1615]: time="2025-07-12T10:13:16.030991898Z" level=info msg="StartContainer for \"4bf64ae3a6686cbafd48317e29e33ca8c265b902e6f850366e57993a86e802e9\" returns successfully" Jul 12 10:13:16.039104 systemd[1]: cri-containerd-4bf64ae3a6686cbafd48317e29e33ca8c265b902e6f850366e57993a86e802e9.scope: Deactivated successfully. Jul 12 10:13:16.049512 containerd[1615]: time="2025-07-12T10:13:16.049394792Z" level=info msg="received exit event container_id:\"4bf64ae3a6686cbafd48317e29e33ca8c265b902e6f850366e57993a86e802e9\" id:\"4bf64ae3a6686cbafd48317e29e33ca8c265b902e6f850366e57993a86e802e9\" pid:3618 exited_at:{seconds:1752315196 nanos:42254669}" Jul 12 10:13:16.055070 containerd[1615]: time="2025-07-12T10:13:16.055026281Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4bf64ae3a6686cbafd48317e29e33ca8c265b902e6f850366e57993a86e802e9\" id:\"4bf64ae3a6686cbafd48317e29e33ca8c265b902e6f850366e57993a86e802e9\" pid:3618 exited_at:{seconds:1752315196 nanos:42254669}" Jul 12 10:13:16.069615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4bf64ae3a6686cbafd48317e29e33ca8c265b902e6f850366e57993a86e802e9-rootfs.mount: Deactivated successfully. Jul 12 10:13:16.779573 containerd[1615]: time="2025-07-12T10:13:16.779494856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 12 10:13:16.795423 kubelet[2945]: I0712 10:13:16.795225 2945 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 10:13:17.693096 kubelet[2945]: E0712 10:13:17.693043 2945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5skpj" podUID="c9d481cf-b5a0-47fe-a981-c3ce861cf9d4" Jul 12 10:13:19.694524 kubelet[2945]: E0712 10:13:19.694462 2945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5skpj" podUID="c9d481cf-b5a0-47fe-a981-c3ce861cf9d4" Jul 12 10:13:20.170412 containerd[1615]: time="2025-07-12T10:13:20.170222088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:20.170982 containerd[1615]: time="2025-07-12T10:13:20.170961410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 12 10:13:20.171539 containerd[1615]: time="2025-07-12T10:13:20.171510644Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:20.173237 containerd[1615]: time="2025-07-12T10:13:20.173210372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:20.174195 containerd[1615]: time="2025-07-12T10:13:20.174171619Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest 
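The exit events above carry timestamps as raw epoch seconds and nanoseconds (exited_at:{seconds:1752315196 nanos:42254669}). Converting them confirms they line up with the 10:13:16 wall-clock records around them; a small conversion snippet, with the values copied from the log:

    # Convert the exited_at epoch value from the TaskExit/exit records above
    # into a readable UTC timestamp.
    from datetime import datetime, timezone

    exited_at = datetime.fromtimestamp(1752315196 + 42254669 / 1e9, tz=timezone.utc)
    print(exited_at.isoformat())  # 2025-07-12T10:13:16.042255+00:00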
\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.39465148s" Jul 12 10:13:20.174258 containerd[1615]: time="2025-07-12T10:13:20.174198325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 12 10:13:20.176811 containerd[1615]: time="2025-07-12T10:13:20.176663731Z" level=info msg="CreateContainer within sandbox \"d2be803a67f1b7a36f9727b7bd4a8a26f8f211a20a72abd823baa05013c57a69\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 12 10:13:20.182287 containerd[1615]: time="2025-07-12T10:13:20.182209168Z" level=info msg="Container 98ebfe5c21eed8ed0859d54cb0c71f8cfa1465584b1a9625bf2469118e808c56: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:20.185511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount972676695.mount: Deactivated successfully. Jul 12 10:13:20.205203 containerd[1615]: time="2025-07-12T10:13:20.205145127Z" level=info msg="CreateContainer within sandbox \"d2be803a67f1b7a36f9727b7bd4a8a26f8f211a20a72abd823baa05013c57a69\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"98ebfe5c21eed8ed0859d54cb0c71f8cfa1465584b1a9625bf2469118e808c56\"" Jul 12 10:13:20.205845 containerd[1615]: time="2025-07-12T10:13:20.205663957Z" level=info msg="StartContainer for \"98ebfe5c21eed8ed0859d54cb0c71f8cfa1465584b1a9625bf2469118e808c56\"" Jul 12 10:13:20.207195 containerd[1615]: time="2025-07-12T10:13:20.207177271Z" level=info msg="connecting to shim 98ebfe5c21eed8ed0859d54cb0c71f8cfa1465584b1a9625bf2469118e808c56" address="unix:///run/containerd/s/e1623bc9a6b1c6d8df7e8fc2d5bb391094ae661087b9638423add627f8e5992f" protocol=ttrpc version=3 Jul 12 10:13:20.228494 systemd[1]: Started cri-containerd-98ebfe5c21eed8ed0859d54cb0c71f8cfa1465584b1a9625bf2469118e808c56.scope - libcontainer container 98ebfe5c21eed8ed0859d54cb0c71f8cfa1465584b1a9625bf2469118e808c56. Jul 12 10:13:20.260741 containerd[1615]: time="2025-07-12T10:13:20.260717079Z" level=info msg="StartContainer for \"98ebfe5c21eed8ed0859d54cb0c71f8cfa1465584b1a9625bf2469118e808c56\" returns successfully" Jul 12 10:13:21.693744 kubelet[2945]: E0712 10:13:21.693486 2945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5skpj" podUID="c9d481cf-b5a0-47fe-a981-c3ce861cf9d4" Jul 12 10:13:22.233917 systemd[1]: cri-containerd-98ebfe5c21eed8ed0859d54cb0c71f8cfa1465584b1a9625bf2469118e808c56.scope: Deactivated successfully. Jul 12 10:13:22.234317 containerd[1615]: time="2025-07-12T10:13:22.234277126Z" level=info msg="received exit event container_id:\"98ebfe5c21eed8ed0859d54cb0c71f8cfa1465584b1a9625bf2469118e808c56\" id:\"98ebfe5c21eed8ed0859d54cb0c71f8cfa1465584b1a9625bf2469118e808c56\" pid:3676 exited_at:{seconds:1752315202 nanos:234085129}" Jul 12 10:13:22.234389 systemd[1]: cri-containerd-98ebfe5c21eed8ed0859d54cb0c71f8cfa1465584b1a9625bf2469118e808c56.scope: Consumed 312ms CPU time, 167M memory peak, 2M read from disk, 171.2M written to disk. 
Jul 12 10:13:22.241321 containerd[1615]: time="2025-07-12T10:13:22.241294123Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98ebfe5c21eed8ed0859d54cb0c71f8cfa1465584b1a9625bf2469118e808c56\" id:\"98ebfe5c21eed8ed0859d54cb0c71f8cfa1465584b1a9625bf2469118e808c56\" pid:3676 exited_at:{seconds:1752315202 nanos:234085129}" Jul 12 10:13:22.334731 kubelet[2945]: I0712 10:13:22.334711 2945 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 12 10:13:22.342345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98ebfe5c21eed8ed0859d54cb0c71f8cfa1465584b1a9625bf2469118e808c56-rootfs.mount: Deactivated successfully. Jul 12 10:13:22.430324 systemd[1]: Created slice kubepods-besteffort-podc16afb24_c7cf_4e3b_9137_2bb6ce5d9b37.slice - libcontainer container kubepods-besteffort-podc16afb24_c7cf_4e3b_9137_2bb6ce5d9b37.slice. Jul 12 10:13:22.440716 systemd[1]: Created slice kubepods-besteffort-pod8e287385_4882_4403_b563_f15bc4f0d568.slice - libcontainer container kubepods-besteffort-pod8e287385_4882_4403_b563_f15bc4f0d568.slice. Jul 12 10:13:22.453429 systemd[1]: Created slice kubepods-besteffort-pod67e71e20_1f27_40b9_8850_6954851c7496.slice - libcontainer container kubepods-besteffort-pod67e71e20_1f27_40b9_8850_6954851c7496.slice. Jul 12 10:13:22.463617 systemd[1]: Created slice kubepods-burstable-podc92b6e91_8a72_4673_931d_6e08cd5a1419.slice - libcontainer container kubepods-burstable-podc92b6e91_8a72_4673_931d_6e08cd5a1419.slice. Jul 12 10:13:22.471318 systemd[1]: Created slice kubepods-besteffort-podf018b885_cd4e_4448_9579_6bb3b0f05831.slice - libcontainer container kubepods-besteffort-podf018b885_cd4e_4448_9579_6bb3b0f05831.slice. Jul 12 10:13:22.480148 systemd[1]: Created slice kubepods-burstable-poda3916880_a410_45ec_874b_6fc524a9fcbc.slice - libcontainer container kubepods-burstable-poda3916880_a410_45ec_874b_6fc524a9fcbc.slice. 
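The "Created slice kubepods-…" records above show the kubelet's systemd cgroup driver creating one transient slice per pod, named from the QoS class (besteffort, burstable) and the pod UID with dashes replaced by underscores. The mapping can be reproduced directly from the UIDs that appear in the log:

    # Reproduce the pod-slice names from the systemd records above
    # (kubelet systemd cgroup driver naming: QoS class + pod UID, dashes -> underscores).
    def pod_slice(qos: str, uid: str) -> str:
        return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

    print(pod_slice("besteffort", "c16afb24-c7cf-4e3b-9137-2bb6ce5d9b37"))
    # -> kubepods-besteffort-podc16afb24_c7cf_4e3b_9137_2bb6ce5d9b37.slice
    print(pod_slice("burstable", "c92b6e91-8a72-4673-931d-6e08cd5a1419"))
    # -> kubepods-burstable-podc92b6e91_8a72_4673_931d_6e08cd5a1419.slice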
Jul 12 10:13:22.485690 kubelet[2945]: I0712 10:13:22.485618 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c16afb24-c7cf-4e3b-9137-2bb6ce5d9b37-goldmane-key-pair\") pod \"goldmane-768f4c5c69-x88vq\" (UID: \"c16afb24-c7cf-4e3b-9137-2bb6ce5d9b37\") " pod="calico-system/goldmane-768f4c5c69-x88vq" Jul 12 10:13:22.485856 kubelet[2945]: I0712 10:13:22.485813 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-685jp\" (UniqueName: \"kubernetes.io/projected/c16afb24-c7cf-4e3b-9137-2bb6ce5d9b37-kube-api-access-685jp\") pod \"goldmane-768f4c5c69-x88vq\" (UID: \"c16afb24-c7cf-4e3b-9137-2bb6ce5d9b37\") " pod="calico-system/goldmane-768f4c5c69-x88vq" Jul 12 10:13:22.485856 kubelet[2945]: I0712 10:13:22.485829 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39a544b0-4d2b-4974-8209-28c325f83665-tigera-ca-bundle\") pod \"calico-kube-controllers-69cd65bdbc-6w6g6\" (UID: \"39a544b0-4d2b-4974-8209-28c325f83665\") " pod="calico-system/calico-kube-controllers-69cd65bdbc-6w6g6" Jul 12 10:13:22.485856 kubelet[2945]: I0712 10:13:22.485841 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt7sc\" (UniqueName: \"kubernetes.io/projected/39a544b0-4d2b-4974-8209-28c325f83665-kube-api-access-lt7sc\") pod \"calico-kube-controllers-69cd65bdbc-6w6g6\" (UID: \"39a544b0-4d2b-4974-8209-28c325f83665\") " pod="calico-system/calico-kube-controllers-69cd65bdbc-6w6g6" Jul 12 10:13:22.486033 kubelet[2945]: I0712 10:13:22.485965 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c92b6e91-8a72-4673-931d-6e08cd5a1419-config-volume\") pod \"coredns-674b8bbfcf-wlj2s\" (UID: \"c92b6e91-8a72-4673-931d-6e08cd5a1419\") " pod="kube-system/coredns-674b8bbfcf-wlj2s" Jul 12 10:13:22.486033 kubelet[2945]: I0712 10:13:22.485980 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6p76\" (UniqueName: \"kubernetes.io/projected/67e71e20-1f27-40b9-8850-6954851c7496-kube-api-access-z6p76\") pod \"whisker-684cfb6fcc-jqpxx\" (UID: \"67e71e20-1f27-40b9-8850-6954851c7496\") " pod="calico-system/whisker-684cfb6fcc-jqpxx" Jul 12 10:13:22.486033 kubelet[2945]: I0712 10:13:22.485993 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3916880-a410-45ec-874b-6fc524a9fcbc-config-volume\") pod \"coredns-674b8bbfcf-gmrqc\" (UID: \"a3916880-a410-45ec-874b-6fc524a9fcbc\") " pod="kube-system/coredns-674b8bbfcf-gmrqc" Jul 12 10:13:22.486033 kubelet[2945]: I0712 10:13:22.486005 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c16afb24-c7cf-4e3b-9137-2bb6ce5d9b37-config\") pod \"goldmane-768f4c5c69-x88vq\" (UID: \"c16afb24-c7cf-4e3b-9137-2bb6ce5d9b37\") " pod="calico-system/goldmane-768f4c5c69-x88vq" Jul 12 10:13:22.486033 kubelet[2945]: I0712 10:13:22.486017 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/8e287385-4882-4403-b563-f15bc4f0d568-calico-apiserver-certs\") pod \"calico-apiserver-ddbdff869-9bsjp\" (UID: \"8e287385-4882-4403-b563-f15bc4f0d568\") " pod="calico-apiserver/calico-apiserver-ddbdff869-9bsjp" Jul 12 10:13:22.486135 kubelet[2945]: I0712 10:13:22.486038 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/67e71e20-1f27-40b9-8850-6954851c7496-whisker-backend-key-pair\") pod \"whisker-684cfb6fcc-jqpxx\" (UID: \"67e71e20-1f27-40b9-8850-6954851c7496\") " pod="calico-system/whisker-684cfb6fcc-jqpxx" Jul 12 10:13:22.486135 kubelet[2945]: I0712 10:13:22.486067 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c16afb24-c7cf-4e3b-9137-2bb6ce5d9b37-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-x88vq\" (UID: \"c16afb24-c7cf-4e3b-9137-2bb6ce5d9b37\") " pod="calico-system/goldmane-768f4c5c69-x88vq" Jul 12 10:13:22.486135 kubelet[2945]: I0712 10:13:22.486079 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw6pj\" (UniqueName: \"kubernetes.io/projected/8e287385-4882-4403-b563-f15bc4f0d568-kube-api-access-hw6pj\") pod \"calico-apiserver-ddbdff869-9bsjp\" (UID: \"8e287385-4882-4403-b563-f15bc4f0d568\") " pod="calico-apiserver/calico-apiserver-ddbdff869-9bsjp" Jul 12 10:13:22.486135 kubelet[2945]: I0712 10:13:22.486089 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpjns\" (UniqueName: \"kubernetes.io/projected/a3916880-a410-45ec-874b-6fc524a9fcbc-kube-api-access-dpjns\") pod \"coredns-674b8bbfcf-gmrqc\" (UID: \"a3916880-a410-45ec-874b-6fc524a9fcbc\") " pod="kube-system/coredns-674b8bbfcf-gmrqc" Jul 12 10:13:22.486135 kubelet[2945]: I0712 10:13:22.486111 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85vmg\" (UniqueName: \"kubernetes.io/projected/c92b6e91-8a72-4673-931d-6e08cd5a1419-kube-api-access-85vmg\") pod \"coredns-674b8bbfcf-wlj2s\" (UID: \"c92b6e91-8a72-4673-931d-6e08cd5a1419\") " pod="kube-system/coredns-674b8bbfcf-wlj2s" Jul 12 10:13:22.486245 kubelet[2945]: I0712 10:13:22.486121 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67e71e20-1f27-40b9-8850-6954851c7496-whisker-ca-bundle\") pod \"whisker-684cfb6fcc-jqpxx\" (UID: \"67e71e20-1f27-40b9-8850-6954851c7496\") " pod="calico-system/whisker-684cfb6fcc-jqpxx" Jul 12 10:13:22.486245 kubelet[2945]: I0712 10:13:22.486132 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f018b885-cd4e-4448-9579-6bb3b0f05831-calico-apiserver-certs\") pod \"calico-apiserver-ddbdff869-x7wvt\" (UID: \"f018b885-cd4e-4448-9579-6bb3b0f05831\") " pod="calico-apiserver/calico-apiserver-ddbdff869-x7wvt" Jul 12 10:13:22.486245 kubelet[2945]: I0712 10:13:22.486144 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlw85\" (UniqueName: \"kubernetes.io/projected/f018b885-cd4e-4448-9579-6bb3b0f05831-kube-api-access-mlw85\") pod \"calico-apiserver-ddbdff869-x7wvt\" (UID: \"f018b885-cd4e-4448-9579-6bb3b0f05831\") " 
pod="calico-apiserver/calico-apiserver-ddbdff869-x7wvt" Jul 12 10:13:22.488152 systemd[1]: Created slice kubepods-besteffort-pod39a544b0_4d2b_4974_8209_28c325f83665.slice - libcontainer container kubepods-besteffort-pod39a544b0_4d2b_4974_8209_28c325f83665.slice. Jul 12 10:13:22.733959 containerd[1615]: time="2025-07-12T10:13:22.733926458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-x88vq,Uid:c16afb24-c7cf-4e3b-9137-2bb6ce5d9b37,Namespace:calico-system,Attempt:0,}" Jul 12 10:13:22.751112 containerd[1615]: time="2025-07-12T10:13:22.750850513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddbdff869-9bsjp,Uid:8e287385-4882-4403-b563-f15bc4f0d568,Namespace:calico-apiserver,Attempt:0,}" Jul 12 10:13:22.765044 containerd[1615]: time="2025-07-12T10:13:22.765021588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-684cfb6fcc-jqpxx,Uid:67e71e20-1f27-40b9-8850-6954851c7496,Namespace:calico-system,Attempt:0,}" Jul 12 10:13:22.784200 containerd[1615]: time="2025-07-12T10:13:22.784139917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddbdff869-x7wvt,Uid:f018b885-cd4e-4448-9579-6bb3b0f05831,Namespace:calico-apiserver,Attempt:0,}" Jul 12 10:13:22.787705 containerd[1615]: time="2025-07-12T10:13:22.786941665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gmrqc,Uid:a3916880-a410-45ec-874b-6fc524a9fcbc,Namespace:kube-system,Attempt:0,}" Jul 12 10:13:22.788421 containerd[1615]: time="2025-07-12T10:13:22.788394976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wlj2s,Uid:c92b6e91-8a72-4673-931d-6e08cd5a1419,Namespace:kube-system,Attempt:0,}" Jul 12 10:13:22.793236 containerd[1615]: time="2025-07-12T10:13:22.793213844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69cd65bdbc-6w6g6,Uid:39a544b0-4d2b-4974-8209-28c325f83665,Namespace:calico-system,Attempt:0,}" Jul 12 10:13:22.876314 containerd[1615]: time="2025-07-12T10:13:22.876274255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 12 10:13:23.096430 containerd[1615]: time="2025-07-12T10:13:23.096160033Z" level=error msg="Failed to destroy network for sandbox \"df0022d9272fcc7e4877310c891ddaba9cfc7c191b8a051c90c8f7b00b147996\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.096814 containerd[1615]: time="2025-07-12T10:13:23.096764795Z" level=error msg="Failed to destroy network for sandbox \"cd47eecc620eb8f84e93d7d1ad5befc0a6418dfac3423097f4f0c66cfa5681e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.100145 containerd[1615]: time="2025-07-12T10:13:23.100099412Z" level=error msg="Failed to destroy network for sandbox \"5dd800df667cc09b8bef12ef1d3cc3daab059590569efbcbe37894a005d1cbf4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.101863 containerd[1615]: time="2025-07-12T10:13:23.101804751Z" level=error msg="Failed to destroy network for sandbox \"091ba30aa9f070c69615d29e20ab2791b7fced8cf84cb13a9173a5181a4316dd\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.102102 containerd[1615]: time="2025-07-12T10:13:23.102063860Z" level=error msg="Failed to destroy network for sandbox \"ebfb322cc2099d888f7527c10834e507048c48771536a0f1f9f3398965b4fe9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.105233 containerd[1615]: time="2025-07-12T10:13:23.105204193Z" level=error msg="Failed to destroy network for sandbox \"9ecd679b3a548c64c1ec614b3b902b7b590bf8050e71a67de130350d795644d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.105443 containerd[1615]: time="2025-07-12T10:13:23.105373671Z" level=error msg="Failed to destroy network for sandbox \"e56e2ac48bcb38da935b21279ee7771f0e06dff5d3f885db8544f94c8537c4bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.106966 containerd[1615]: time="2025-07-12T10:13:23.106876447Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-684cfb6fcc-jqpxx,Uid:67e71e20-1f27-40b9-8850-6954851c7496,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df0022d9272fcc7e4877310c891ddaba9cfc7c191b8a051c90c8f7b00b147996\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.115716 containerd[1615]: time="2025-07-12T10:13:23.115461186Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-x88vq,Uid:c16afb24-c7cf-4e3b-9137-2bb6ce5d9b37,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd47eecc620eb8f84e93d7d1ad5befc0a6418dfac3423097f4f0c66cfa5681e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.116701 containerd[1615]: time="2025-07-12T10:13:23.116665571Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddbdff869-x7wvt,Uid:f018b885-cd4e-4448-9579-6bb3b0f05831,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dd800df667cc09b8bef12ef1d3cc3daab059590569efbcbe37894a005d1cbf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.116988 containerd[1615]: time="2025-07-12T10:13:23.116973424Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69cd65bdbc-6w6g6,Uid:39a544b0-4d2b-4974-8209-28c325f83665,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"091ba30aa9f070c69615d29e20ab2791b7fced8cf84cb13a9173a5181a4316dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.117289 containerd[1615]: time="2025-07-12T10:13:23.117274543Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wlj2s,Uid:c92b6e91-8a72-4673-931d-6e08cd5a1419,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebfb322cc2099d888f7527c10834e507048c48771536a0f1f9f3398965b4fe9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.117608 containerd[1615]: time="2025-07-12T10:13:23.117570901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gmrqc,Uid:a3916880-a410-45ec-874b-6fc524a9fcbc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ecd679b3a548c64c1ec614b3b902b7b590bf8050e71a67de130350d795644d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.117899 containerd[1615]: time="2025-07-12T10:13:23.117864659Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddbdff869-9bsjp,Uid:8e287385-4882-4403-b563-f15bc4f0d568,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e56e2ac48bcb38da935b21279ee7771f0e06dff5d3f885db8544f94c8537c4bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.126355 kubelet[2945]: E0712 10:13:23.126195 2945 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df0022d9272fcc7e4877310c891ddaba9cfc7c191b8a051c90c8f7b00b147996\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.126355 kubelet[2945]: E0712 10:13:23.126241 2945 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"091ba30aa9f070c69615d29e20ab2791b7fced8cf84cb13a9173a5181a4316dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.126355 kubelet[2945]: E0712 10:13:23.126266 2945 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df0022d9272fcc7e4877310c891ddaba9cfc7c191b8a051c90c8f7b00b147996\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-684cfb6fcc-jqpxx" Jul 12 10:13:23.126355 kubelet[2945]: E0712 10:13:23.126266 2945 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"091ba30aa9f070c69615d29e20ab2791b7fced8cf84cb13a9173a5181a4316dd\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69cd65bdbc-6w6g6" Jul 12 10:13:23.126736 kubelet[2945]: E0712 10:13:23.126281 2945 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"091ba30aa9f070c69615d29e20ab2791b7fced8cf84cb13a9173a5181a4316dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69cd65bdbc-6w6g6" Jul 12 10:13:23.126736 kubelet[2945]: E0712 10:13:23.126281 2945 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df0022d9272fcc7e4877310c891ddaba9cfc7c191b8a051c90c8f7b00b147996\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-684cfb6fcc-jqpxx" Jul 12 10:13:23.128919 kubelet[2945]: E0712 10:13:23.128782 2945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69cd65bdbc-6w6g6_calico-system(39a544b0-4d2b-4974-8209-28c325f83665)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69cd65bdbc-6w6g6_calico-system(39a544b0-4d2b-4974-8209-28c325f83665)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"091ba30aa9f070c69615d29e20ab2791b7fced8cf84cb13a9173a5181a4316dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69cd65bdbc-6w6g6" podUID="39a544b0-4d2b-4974-8209-28c325f83665" Jul 12 10:13:23.128919 kubelet[2945]: E0712 10:13:23.128782 2945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-684cfb6fcc-jqpxx_calico-system(67e71e20-1f27-40b9-8850-6954851c7496)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-684cfb6fcc-jqpxx_calico-system(67e71e20-1f27-40b9-8850-6954851c7496)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df0022d9272fcc7e4877310c891ddaba9cfc7c191b8a051c90c8f7b00b147996\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-684cfb6fcc-jqpxx" podUID="67e71e20-1f27-40b9-8850-6954851c7496" Jul 12 10:13:23.128919 kubelet[2945]: E0712 10:13:23.128848 2945 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebfb322cc2099d888f7527c10834e507048c48771536a0f1f9f3398965b4fe9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.129097 kubelet[2945]: E0712 10:13:23.128870 2945 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ebfb322cc2099d888f7527c10834e507048c48771536a0f1f9f3398965b4fe9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wlj2s" Jul 12 10:13:23.129097 kubelet[2945]: E0712 10:13:23.128882 2945 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebfb322cc2099d888f7527c10834e507048c48771536a0f1f9f3398965b4fe9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wlj2s" Jul 12 10:13:23.129097 kubelet[2945]: E0712 10:13:23.128904 2945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-wlj2s_kube-system(c92b6e91-8a72-4673-931d-6e08cd5a1419)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-wlj2s_kube-system(c92b6e91-8a72-4673-931d-6e08cd5a1419)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ebfb322cc2099d888f7527c10834e507048c48771536a0f1f9f3398965b4fe9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-wlj2s" podUID="c92b6e91-8a72-4673-931d-6e08cd5a1419" Jul 12 10:13:23.129174 kubelet[2945]: E0712 10:13:23.128924 2945 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ecd679b3a548c64c1ec614b3b902b7b590bf8050e71a67de130350d795644d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.129174 kubelet[2945]: E0712 10:13:23.128935 2945 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ecd679b3a548c64c1ec614b3b902b7b590bf8050e71a67de130350d795644d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gmrqc" Jul 12 10:13:23.129174 kubelet[2945]: E0712 10:13:23.128950 2945 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ecd679b3a548c64c1ec614b3b902b7b590bf8050e71a67de130350d795644d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gmrqc" Jul 12 10:13:23.129823 kubelet[2945]: E0712 10:13:23.128964 2945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gmrqc_kube-system(a3916880-a410-45ec-874b-6fc524a9fcbc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gmrqc_kube-system(a3916880-a410-45ec-874b-6fc524a9fcbc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ecd679b3a548c64c1ec614b3b902b7b590bf8050e71a67de130350d795644d9\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gmrqc" podUID="a3916880-a410-45ec-874b-6fc524a9fcbc" Jul 12 10:13:23.129823 kubelet[2945]: E0712 10:13:23.126228 2945 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dd800df667cc09b8bef12ef1d3cc3daab059590569efbcbe37894a005d1cbf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.129823 kubelet[2945]: E0712 10:13:23.128982 2945 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dd800df667cc09b8bef12ef1d3cc3daab059590569efbcbe37894a005d1cbf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ddbdff869-x7wvt" Jul 12 10:13:23.129917 kubelet[2945]: E0712 10:13:23.128990 2945 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dd800df667cc09b8bef12ef1d3cc3daab059590569efbcbe37894a005d1cbf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ddbdff869-x7wvt" Jul 12 10:13:23.129917 kubelet[2945]: E0712 10:13:23.129007 2945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ddbdff869-x7wvt_calico-apiserver(f018b885-cd4e-4448-9579-6bb3b0f05831)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ddbdff869-x7wvt_calico-apiserver(f018b885-cd4e-4448-9579-6bb3b0f05831)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5dd800df667cc09b8bef12ef1d3cc3daab059590569efbcbe37894a005d1cbf4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ddbdff869-x7wvt" podUID="f018b885-cd4e-4448-9579-6bb3b0f05831" Jul 12 10:13:23.129917 kubelet[2945]: E0712 10:13:23.129019 2945 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e56e2ac48bcb38da935b21279ee7771f0e06dff5d3f885db8544f94c8537c4bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.129994 kubelet[2945]: E0712 10:13:23.129030 2945 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e56e2ac48bcb38da935b21279ee7771f0e06dff5d3f885db8544f94c8537c4bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ddbdff869-9bsjp" Jul 12 10:13:23.129994 kubelet[2945]: E0712 10:13:23.129040 2945 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e56e2ac48bcb38da935b21279ee7771f0e06dff5d3f885db8544f94c8537c4bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ddbdff869-9bsjp" Jul 12 10:13:23.129994 kubelet[2945]: E0712 10:13:23.129059 2945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ddbdff869-9bsjp_calico-apiserver(8e287385-4882-4403-b563-f15bc4f0d568)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ddbdff869-9bsjp_calico-apiserver(8e287385-4882-4403-b563-f15bc4f0d568)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e56e2ac48bcb38da935b21279ee7771f0e06dff5d3f885db8544f94c8537c4bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ddbdff869-9bsjp" podUID="8e287385-4882-4403-b563-f15bc4f0d568" Jul 12 10:13:23.132458 kubelet[2945]: E0712 10:13:23.126209 2945 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd47eecc620eb8f84e93d7d1ad5befc0a6418dfac3423097f4f0c66cfa5681e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.132458 kubelet[2945]: E0712 10:13:23.129088 2945 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd47eecc620eb8f84e93d7d1ad5befc0a6418dfac3423097f4f0c66cfa5681e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-x88vq" Jul 12 10:13:23.132458 kubelet[2945]: E0712 10:13:23.129101 2945 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd47eecc620eb8f84e93d7d1ad5befc0a6418dfac3423097f4f0c66cfa5681e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-x88vq" Jul 12 10:13:23.132547 kubelet[2945]: E0712 10:13:23.129125 2945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-x88vq_calico-system(c16afb24-c7cf-4e3b-9137-2bb6ce5d9b37)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-x88vq_calico-system(c16afb24-c7cf-4e3b-9137-2bb6ce5d9b37)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd47eecc620eb8f84e93d7d1ad5befc0a6418dfac3423097f4f0c66cfa5681e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-x88vq" podUID="c16afb24-c7cf-4e3b-9137-2bb6ce5d9b37" Jul 12 10:13:23.715525 systemd[1]: Created 
slice kubepods-besteffort-podc9d481cf_b5a0_47fe_a981_c3ce861cf9d4.slice - libcontainer container kubepods-besteffort-podc9d481cf_b5a0_47fe_a981_c3ce861cf9d4.slice. Jul 12 10:13:23.717158 containerd[1615]: time="2025-07-12T10:13:23.717138456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5skpj,Uid:c9d481cf-b5a0-47fe-a981-c3ce861cf9d4,Namespace:calico-system,Attempt:0,}" Jul 12 10:13:23.752420 containerd[1615]: time="2025-07-12T10:13:23.752364466Z" level=error msg="Failed to destroy network for sandbox \"9f995453997fb56bb607bf758c2b85405439cd1afd723c6c9f73de241a61f5fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.753709 systemd[1]: run-netns-cni\x2da7e4352d\x2d460f\x2d78bb\x2d6aac\x2da5c2786eb4f5.mount: Deactivated successfully. Jul 12 10:13:23.755852 containerd[1615]: time="2025-07-12T10:13:23.755630294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5skpj,Uid:c9d481cf-b5a0-47fe-a981-c3ce861cf9d4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f995453997fb56bb607bf758c2b85405439cd1afd723c6c9f73de241a61f5fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.756249 kubelet[2945]: E0712 10:13:23.756222 2945 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f995453997fb56bb607bf758c2b85405439cd1afd723c6c9f73de241a61f5fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:23.756302 kubelet[2945]: E0712 10:13:23.756262 2945 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f995453997fb56bb607bf758c2b85405439cd1afd723c6c9f73de241a61f5fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5skpj" Jul 12 10:13:23.756486 kubelet[2945]: E0712 10:13:23.756371 2945 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f995453997fb56bb607bf758c2b85405439cd1afd723c6c9f73de241a61f5fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5skpj" Jul 12 10:13:23.756536 kubelet[2945]: E0712 10:13:23.756516 2945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5skpj_calico-system(c9d481cf-b5a0-47fe-a981-c3ce861cf9d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5skpj_calico-system(c9d481cf-b5a0-47fe-a981-c3ce861cf9d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f995453997fb56bb607bf758c2b85405439cd1afd723c6c9f73de241a61f5fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5skpj" podUID="c9d481cf-b5a0-47fe-a981-c3ce861cf9d4" Jul 12 10:13:27.570296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount168108714.mount: Deactivated successfully. Jul 12 10:13:27.621809 containerd[1615]: time="2025-07-12T10:13:27.615441298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:27.623528 containerd[1615]: time="2025-07-12T10:13:27.622998026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 12 10:13:27.623528 containerd[1615]: time="2025-07-12T10:13:27.623302081Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:27.624490 containerd[1615]: time="2025-07-12T10:13:27.624464540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:27.624844 containerd[1615]: time="2025-07-12T10:13:27.624649958Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 4.748334583s" Jul 12 10:13:27.624844 containerd[1615]: time="2025-07-12T10:13:27.624667290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 12 10:13:27.643352 containerd[1615]: time="2025-07-12T10:13:27.643324277Z" level=info msg="CreateContainer within sandbox \"d2be803a67f1b7a36f9727b7bd4a8a26f8f211a20a72abd823baa05013c57a69\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 12 10:13:27.664195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3549686272.mount: Deactivated successfully. 
Jul 12 10:13:27.664314 containerd[1615]: time="2025-07-12T10:13:27.664291435Z" level=info msg="Container 479e01e5cdb5db9e53d13c70f602154ab0c3d70f9300796591572fe312c685de: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:27.677357 containerd[1615]: time="2025-07-12T10:13:27.677304764Z" level=info msg="CreateContainer within sandbox \"d2be803a67f1b7a36f9727b7bd4a8a26f8f211a20a72abd823baa05013c57a69\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"479e01e5cdb5db9e53d13c70f602154ab0c3d70f9300796591572fe312c685de\"" Jul 12 10:13:27.678638 containerd[1615]: time="2025-07-12T10:13:27.678471127Z" level=info msg="StartContainer for \"479e01e5cdb5db9e53d13c70f602154ab0c3d70f9300796591572fe312c685de\"" Jul 12 10:13:27.681167 containerd[1615]: time="2025-07-12T10:13:27.681148223Z" level=info msg="connecting to shim 479e01e5cdb5db9e53d13c70f602154ab0c3d70f9300796591572fe312c685de" address="unix:///run/containerd/s/e1623bc9a6b1c6d8df7e8fc2d5bb391094ae661087b9638423add627f8e5992f" protocol=ttrpc version=3 Jul 12 10:13:27.779516 systemd[1]: Started cri-containerd-479e01e5cdb5db9e53d13c70f602154ab0c3d70f9300796591572fe312c685de.scope - libcontainer container 479e01e5cdb5db9e53d13c70f602154ab0c3d70f9300796591572fe312c685de. Jul 12 10:13:27.835464 containerd[1615]: time="2025-07-12T10:13:27.835363880Z" level=info msg="StartContainer for \"479e01e5cdb5db9e53d13c70f602154ab0c3d70f9300796591572fe312c685de\" returns successfully" Jul 12 10:13:27.899412 kubelet[2945]: I0712 10:13:27.897664 2945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ppd8c" podStartSLOduration=1.5722257960000001 podStartE2EDuration="16.897653564s" podCreationTimestamp="2025-07-12 10:13:11 +0000 UTC" firstStartedPulling="2025-07-12 10:13:12.299653995 +0000 UTC m=+16.722581126" lastFinishedPulling="2025-07-12 10:13:27.62508176 +0000 UTC m=+32.048008894" observedRunningTime="2025-07-12 10:13:27.897234237 +0000 UTC m=+32.320161379" watchObservedRunningTime="2025-07-12 10:13:27.897653564 +0000 UTC m=+32.320580702" Jul 12 10:13:28.155936 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 12 10:13:28.179459 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 12 10:13:28.212769 containerd[1615]: time="2025-07-12T10:13:28.212740917Z" level=info msg="TaskExit event in podsandbox handler container_id:\"479e01e5cdb5db9e53d13c70f602154ab0c3d70f9300796591572fe312c685de\" id:\"06ccdea201ad80ab966935325ff41765664aa109dbcf3c3e99cda28be726835f\" pid:3989 exit_status:1 exited_at:{seconds:1752315208 nanos:200483960}" Jul 12 10:13:28.431182 kubelet[2945]: I0712 10:13:28.430887 2945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/67e71e20-1f27-40b9-8850-6954851c7496-whisker-backend-key-pair\") pod \"67e71e20-1f27-40b9-8850-6954851c7496\" (UID: \"67e71e20-1f27-40b9-8850-6954851c7496\") " Jul 12 10:13:28.431430 kubelet[2945]: I0712 10:13:28.431420 2945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67e71e20-1f27-40b9-8850-6954851c7496-whisker-ca-bundle\") pod \"67e71e20-1f27-40b9-8850-6954851c7496\" (UID: \"67e71e20-1f27-40b9-8850-6954851c7496\") " Jul 12 10:13:28.431655 kubelet[2945]: I0712 10:13:28.431494 2945 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6p76\" (UniqueName: \"kubernetes.io/projected/67e71e20-1f27-40b9-8850-6954851c7496-kube-api-access-z6p76\") pod \"67e71e20-1f27-40b9-8850-6954851c7496\" (UID: \"67e71e20-1f27-40b9-8850-6954851c7496\") " Jul 12 10:13:28.433190 kubelet[2945]: I0712 10:13:28.433117 2945 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e71e20-1f27-40b9-8850-6954851c7496-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "67e71e20-1f27-40b9-8850-6954851c7496" (UID: "67e71e20-1f27-40b9-8850-6954851c7496"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 10:13:28.438336 kubelet[2945]: I0712 10:13:28.438316 2945 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67e71e20-1f27-40b9-8850-6954851c7496-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "67e71e20-1f27-40b9-8850-6954851c7496" (UID: "67e71e20-1f27-40b9-8850-6954851c7496"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 10:13:28.438444 kubelet[2945]: I0712 10:13:28.438302 2945 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67e71e20-1f27-40b9-8850-6954851c7496-kube-api-access-z6p76" (OuterVolumeSpecName: "kube-api-access-z6p76") pod "67e71e20-1f27-40b9-8850-6954851c7496" (UID: "67e71e20-1f27-40b9-8850-6954851c7496"). InnerVolumeSpecName "kube-api-access-z6p76". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 10:13:28.532159 kubelet[2945]: I0712 10:13:28.532065 2945 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z6p76\" (UniqueName: \"kubernetes.io/projected/67e71e20-1f27-40b9-8850-6954851c7496-kube-api-access-z6p76\") on node \"localhost\" DevicePath \"\"" Jul 12 10:13:28.532159 kubelet[2945]: I0712 10:13:28.532087 2945 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/67e71e20-1f27-40b9-8850-6954851c7496-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 12 10:13:28.532159 kubelet[2945]: I0712 10:13:28.532093 2945 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67e71e20-1f27-40b9-8850-6954851c7496-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 12 10:13:28.572140 systemd[1]: var-lib-kubelet-pods-67e71e20\x2d1f27\x2d40b9\x2d8850\x2d6954851c7496-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 12 10:13:28.572510 systemd[1]: var-lib-kubelet-pods-67e71e20\x2d1f27\x2d40b9\x2d8850\x2d6954851c7496-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz6p76.mount: Deactivated successfully. Jul 12 10:13:28.874235 systemd[1]: Removed slice kubepods-besteffort-pod67e71e20_1f27_40b9_8850_6954851c7496.slice - libcontainer container kubepods-besteffort-pod67e71e20_1f27_40b9_8850_6954851c7496.slice. Jul 12 10:13:28.962384 systemd[1]: Created slice kubepods-besteffort-podb7c7c0d6_659a_40e4_afb4_64fac5a597c9.slice - libcontainer container kubepods-besteffort-podb7c7c0d6_659a_40e4_afb4_64fac5a597c9.slice. Jul 12 10:13:28.964999 containerd[1615]: time="2025-07-12T10:13:28.964656989Z" level=info msg="TaskExit event in podsandbox handler container_id:\"479e01e5cdb5db9e53d13c70f602154ab0c3d70f9300796591572fe312c685de\" id:\"66f4d06652a3ef68c86c3ffb8550f99134b7f701f646f18da9c428294f9b645e\" pid:4050 exit_status:1 exited_at:{seconds:1752315208 nanos:964449728}" Jul 12 10:13:29.035903 kubelet[2945]: I0712 10:13:29.035810 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b7c7c0d6-659a-40e4-afb4-64fac5a597c9-whisker-backend-key-pair\") pod \"whisker-68dd487c95-2shq2\" (UID: \"b7c7c0d6-659a-40e4-afb4-64fac5a597c9\") " pod="calico-system/whisker-68dd487c95-2shq2" Jul 12 10:13:29.035903 kubelet[2945]: I0712 10:13:29.035836 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7c7c0d6-659a-40e4-afb4-64fac5a597c9-whisker-ca-bundle\") pod \"whisker-68dd487c95-2shq2\" (UID: \"b7c7c0d6-659a-40e4-afb4-64fac5a597c9\") " pod="calico-system/whisker-68dd487c95-2shq2" Jul 12 10:13:29.035903 kubelet[2945]: I0712 10:13:29.035863 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfhxb\" (UniqueName: \"kubernetes.io/projected/b7c7c0d6-659a-40e4-afb4-64fac5a597c9-kube-api-access-qfhxb\") pod \"whisker-68dd487c95-2shq2\" (UID: \"b7c7c0d6-659a-40e4-afb4-64fac5a597c9\") " pod="calico-system/whisker-68dd487c95-2shq2" Jul 12 10:13:29.266064 containerd[1615]: time="2025-07-12T10:13:29.265978466Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-68dd487c95-2shq2,Uid:b7c7c0d6-659a-40e4-afb4-64fac5a597c9,Namespace:calico-system,Attempt:0,}" Jul 12 10:13:29.706641 kubelet[2945]: I0712 10:13:29.706612 2945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67e71e20-1f27-40b9-8850-6954851c7496" path="/var/lib/kubelet/pods/67e71e20-1f27-40b9-8850-6954851c7496/volumes" Jul 12 10:13:29.799589 systemd-networkd[1543]: calia3ae53d6598: Link UP Jul 12 10:13:29.800357 systemd-networkd[1543]: calia3ae53d6598: Gained carrier Jul 12 10:13:29.852525 containerd[1615]: 2025-07-12 10:13:29.287 [INFO][4070] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 10:13:29.852525 containerd[1615]: 2025-07-12 10:13:29.325 [INFO][4070] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--68dd487c95--2shq2-eth0 whisker-68dd487c95- calico-system b7c7c0d6-659a-40e4-afb4-64fac5a597c9 872 0 2025-07-12 10:13:28 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:68dd487c95 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-68dd487c95-2shq2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia3ae53d6598 [] [] }} ContainerID="fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db" Namespace="calico-system" Pod="whisker-68dd487c95-2shq2" WorkloadEndpoint="localhost-k8s-whisker--68dd487c95--2shq2-" Jul 12 10:13:29.852525 containerd[1615]: 2025-07-12 10:13:29.326 [INFO][4070] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db" Namespace="calico-system" Pod="whisker-68dd487c95-2shq2" WorkloadEndpoint="localhost-k8s-whisker--68dd487c95--2shq2-eth0" Jul 12 10:13:29.852525 containerd[1615]: 2025-07-12 10:13:29.711 [INFO][4077] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db" HandleID="k8s-pod-network.fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db" Workload="localhost-k8s-whisker--68dd487c95--2shq2-eth0" Jul 12 10:13:29.855228 containerd[1615]: 2025-07-12 10:13:29.715 [INFO][4077] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db" HandleID="k8s-pod-network.fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db" Workload="localhost-k8s-whisker--68dd487c95--2shq2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000335aa0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-68dd487c95-2shq2", "timestamp":"2025-07-12 10:13:29.711563987 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 10:13:29.855228 containerd[1615]: 2025-07-12 10:13:29.715 [INFO][4077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 10:13:29.855228 containerd[1615]: 2025-07-12 10:13:29.723 [INFO][4077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 10:13:29.855228 containerd[1615]: 2025-07-12 10:13:29.725 [INFO][4077] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 10:13:29.855228 containerd[1615]: 2025-07-12 10:13:29.742 [INFO][4077] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db" host="localhost" Jul 12 10:13:29.855228 containerd[1615]: 2025-07-12 10:13:29.758 [INFO][4077] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 10:13:29.855228 containerd[1615]: 2025-07-12 10:13:29.764 [INFO][4077] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 10:13:29.855228 containerd[1615]: 2025-07-12 10:13:29.765 [INFO][4077] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:29.855228 containerd[1615]: 2025-07-12 10:13:29.767 [INFO][4077] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:29.855228 containerd[1615]: 2025-07-12 10:13:29.767 [INFO][4077] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db" host="localhost" Jul 12 10:13:29.857089 containerd[1615]: 2025-07-12 10:13:29.767 [INFO][4077] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db Jul 12 10:13:29.857089 containerd[1615]: 2025-07-12 10:13:29.774 [INFO][4077] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db" host="localhost" Jul 12 10:13:29.857089 containerd[1615]: 2025-07-12 10:13:29.777 [INFO][4077] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db" host="localhost" Jul 12 10:13:29.857089 containerd[1615]: 2025-07-12 10:13:29.777 [INFO][4077] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db" host="localhost" Jul 12 10:13:29.857089 containerd[1615]: 2025-07-12 10:13:29.777 [INFO][4077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 10:13:29.857089 containerd[1615]: 2025-07-12 10:13:29.777 [INFO][4077] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db" HandleID="k8s-pod-network.fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db" Workload="localhost-k8s-whisker--68dd487c95--2shq2-eth0" Jul 12 10:13:29.867081 containerd[1615]: 2025-07-12 10:13:29.779 [INFO][4070] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db" Namespace="calico-system" Pod="whisker-68dd487c95-2shq2" WorkloadEndpoint="localhost-k8s-whisker--68dd487c95--2shq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--68dd487c95--2shq2-eth0", GenerateName:"whisker-68dd487c95-", Namespace:"calico-system", SelfLink:"", UID:"b7c7c0d6-659a-40e4-afb4-64fac5a597c9", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68dd487c95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-68dd487c95-2shq2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia3ae53d6598", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:29.867081 containerd[1615]: 2025-07-12 10:13:29.780 [INFO][4070] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db" Namespace="calico-system" Pod="whisker-68dd487c95-2shq2" WorkloadEndpoint="localhost-k8s-whisker--68dd487c95--2shq2-eth0" Jul 12 10:13:29.867155 containerd[1615]: 2025-07-12 10:13:29.780 [INFO][4070] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3ae53d6598 ContainerID="fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db" Namespace="calico-system" Pod="whisker-68dd487c95-2shq2" WorkloadEndpoint="localhost-k8s-whisker--68dd487c95--2shq2-eth0" Jul 12 10:13:29.867155 containerd[1615]: 2025-07-12 10:13:29.809 [INFO][4070] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db" Namespace="calico-system" Pod="whisker-68dd487c95-2shq2" WorkloadEndpoint="localhost-k8s-whisker--68dd487c95--2shq2-eth0" Jul 12 10:13:29.867188 containerd[1615]: 2025-07-12 10:13:29.810 [INFO][4070] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db" Namespace="calico-system" Pod="whisker-68dd487c95-2shq2" WorkloadEndpoint="localhost-k8s-whisker--68dd487c95--2shq2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--68dd487c95--2shq2-eth0", GenerateName:"whisker-68dd487c95-", Namespace:"calico-system", SelfLink:"", UID:"b7c7c0d6-659a-40e4-afb4-64fac5a597c9", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68dd487c95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db", Pod:"whisker-68dd487c95-2shq2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia3ae53d6598", MAC:"b2:56:eb:84:7b:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:29.867230 containerd[1615]: 2025-07-12 10:13:29.833 [INFO][4070] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db" Namespace="calico-system" Pod="whisker-68dd487c95-2shq2" WorkloadEndpoint="localhost-k8s-whisker--68dd487c95--2shq2-eth0" Jul 12 10:13:29.950760 containerd[1615]: time="2025-07-12T10:13:29.950733100Z" level=info msg="connecting to shim fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db" address="unix:///run/containerd/s/5b0e7ccd77f7448e1ceaec4095edfc580c0bc9a4e8cc6482aee08ea49c5291c0" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:13:29.973543 systemd[1]: Started cri-containerd-fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db.scope - libcontainer container fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db. 
Jul 12 10:13:29.986733 systemd-resolved[1491]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 10:13:30.014567 containerd[1615]: time="2025-07-12T10:13:30.014543297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68dd487c95-2shq2,Uid:b7c7c0d6-659a-40e4-afb4-64fac5a597c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db\"" Jul 12 10:13:30.018600 containerd[1615]: time="2025-07-12T10:13:30.018510472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 12 10:13:31.184138 containerd[1615]: time="2025-07-12T10:13:31.184106751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:31.185567 containerd[1615]: time="2025-07-12T10:13:31.185551097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 12 10:13:31.185856 containerd[1615]: time="2025-07-12T10:13:31.185841992Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:31.187123 containerd[1615]: time="2025-07-12T10:13:31.187106385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:31.187502 containerd[1615]: time="2025-07-12T10:13:31.187429775Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.168901687s" Jul 12 10:13:31.187502 containerd[1615]: time="2025-07-12T10:13:31.187448644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 12 10:13:31.193077 containerd[1615]: time="2025-07-12T10:13:31.193058602Z" level=info msg="CreateContainer within sandbox \"fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 12 10:13:31.197534 containerd[1615]: time="2025-07-12T10:13:31.197516383Z" level=info msg="Container 10132abf11f47a8f91b0b3fb9b180e925281347414b7f58dd30ede3e3fd8b780: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:31.200107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1026153940.mount: Deactivated successfully. 
Jul 12 10:13:31.202362 containerd[1615]: time="2025-07-12T10:13:31.202340347Z" level=info msg="CreateContainer within sandbox \"fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"10132abf11f47a8f91b0b3fb9b180e925281347414b7f58dd30ede3e3fd8b780\"" Jul 12 10:13:31.203291 containerd[1615]: time="2025-07-12T10:13:31.203274382Z" level=info msg="StartContainer for \"10132abf11f47a8f91b0b3fb9b180e925281347414b7f58dd30ede3e3fd8b780\"" Jul 12 10:13:31.204687 containerd[1615]: time="2025-07-12T10:13:31.203845288Z" level=info msg="connecting to shim 10132abf11f47a8f91b0b3fb9b180e925281347414b7f58dd30ede3e3fd8b780" address="unix:///run/containerd/s/5b0e7ccd77f7448e1ceaec4095edfc580c0bc9a4e8cc6482aee08ea49c5291c0" protocol=ttrpc version=3 Jul 12 10:13:31.222493 systemd[1]: Started cri-containerd-10132abf11f47a8f91b0b3fb9b180e925281347414b7f58dd30ede3e3fd8b780.scope - libcontainer container 10132abf11f47a8f91b0b3fb9b180e925281347414b7f58dd30ede3e3fd8b780. Jul 12 10:13:31.254374 containerd[1615]: time="2025-07-12T10:13:31.254340756Z" level=info msg="StartContainer for \"10132abf11f47a8f91b0b3fb9b180e925281347414b7f58dd30ede3e3fd8b780\" returns successfully" Jul 12 10:13:31.255509 containerd[1615]: time="2025-07-12T10:13:31.255308350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 12 10:13:31.297509 systemd-networkd[1543]: calia3ae53d6598: Gained IPv6LL Jul 12 10:13:32.958013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3647189198.mount: Deactivated successfully. Jul 12 10:13:33.184918 containerd[1615]: time="2025-07-12T10:13:33.184818791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:33.185370 containerd[1615]: time="2025-07-12T10:13:33.185348256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 12 10:13:33.186079 containerd[1615]: time="2025-07-12T10:13:33.185764953Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:33.187151 containerd[1615]: time="2025-07-12T10:13:33.187102785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:33.187731 containerd[1615]: time="2025-07-12T10:13:33.187633463Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 1.932306423s" Jul 12 10:13:33.187731 containerd[1615]: time="2025-07-12T10:13:33.187657043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 12 10:13:33.190447 containerd[1615]: time="2025-07-12T10:13:33.190426763Z" level=info msg="CreateContainer within sandbox \"fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db\" for container 
&ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 12 10:13:33.198866 containerd[1615]: time="2025-07-12T10:13:33.198449791Z" level=info msg="Container a20d587c2f5567e9f21c3463a2abc3bdb61456ca5a799f1558ebafac7f3324b7: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:33.201853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3308609569.mount: Deactivated successfully. Jul 12 10:13:33.206234 containerd[1615]: time="2025-07-12T10:13:33.206210901Z" level=info msg="CreateContainer within sandbox \"fee3e3665c85e1da1e3cd51af10136de1fb17513e2f318068e86f8221a20a7db\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"a20d587c2f5567e9f21c3463a2abc3bdb61456ca5a799f1558ebafac7f3324b7\"" Jul 12 10:13:33.206906 containerd[1615]: time="2025-07-12T10:13:33.206862381Z" level=info msg="StartContainer for \"a20d587c2f5567e9f21c3463a2abc3bdb61456ca5a799f1558ebafac7f3324b7\"" Jul 12 10:13:33.208137 containerd[1615]: time="2025-07-12T10:13:33.208092919Z" level=info msg="connecting to shim a20d587c2f5567e9f21c3463a2abc3bdb61456ca5a799f1558ebafac7f3324b7" address="unix:///run/containerd/s/5b0e7ccd77f7448e1ceaec4095edfc580c0bc9a4e8cc6482aee08ea49c5291c0" protocol=ttrpc version=3 Jul 12 10:13:33.227567 systemd[1]: Started cri-containerd-a20d587c2f5567e9f21c3463a2abc3bdb61456ca5a799f1558ebafac7f3324b7.scope - libcontainer container a20d587c2f5567e9f21c3463a2abc3bdb61456ca5a799f1558ebafac7f3324b7. Jul 12 10:13:33.265777 containerd[1615]: time="2025-07-12T10:13:33.265701394Z" level=info msg="StartContainer for \"a20d587c2f5567e9f21c3463a2abc3bdb61456ca5a799f1558ebafac7f3324b7\" returns successfully" Jul 12 10:13:33.765023 containerd[1615]: time="2025-07-12T10:13:33.764868401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wlj2s,Uid:c92b6e91-8a72-4673-931d-6e08cd5a1419,Namespace:kube-system,Attempt:0,}" Jul 12 10:13:33.827848 systemd-networkd[1543]: cali8ea4576f333: Link UP Jul 12 10:13:33.829188 systemd-networkd[1543]: cali8ea4576f333: Gained carrier Jul 12 10:13:33.843450 containerd[1615]: 2025-07-12 10:13:33.781 [INFO][4376] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 10:13:33.843450 containerd[1615]: 2025-07-12 10:13:33.788 [INFO][4376] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--wlj2s-eth0 coredns-674b8bbfcf- kube-system c92b6e91-8a72-4673-931d-6e08cd5a1419 803 0 2025-07-12 10:13:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-wlj2s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8ea4576f333 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1" Namespace="kube-system" Pod="coredns-674b8bbfcf-wlj2s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wlj2s-" Jul 12 10:13:33.843450 containerd[1615]: 2025-07-12 10:13:33.788 [INFO][4376] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1" Namespace="kube-system" Pod="coredns-674b8bbfcf-wlj2s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wlj2s-eth0" Jul 12 10:13:33.843450 containerd[1615]: 2025-07-12 10:13:33.803 [INFO][4388] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1" HandleID="k8s-pod-network.46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1" Workload="localhost-k8s-coredns--674b8bbfcf--wlj2s-eth0" Jul 12 10:13:33.843618 containerd[1615]: 2025-07-12 10:13:33.803 [INFO][4388] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1" HandleID="k8s-pod-network.46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1" Workload="localhost-k8s-coredns--674b8bbfcf--wlj2s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f170), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-wlj2s", "timestamp":"2025-07-12 10:13:33.803098952 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 10:13:33.843618 containerd[1615]: 2025-07-12 10:13:33.803 [INFO][4388] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 10:13:33.843618 containerd[1615]: 2025-07-12 10:13:33.803 [INFO][4388] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 10:13:33.843618 containerd[1615]: 2025-07-12 10:13:33.803 [INFO][4388] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 10:13:33.843618 containerd[1615]: 2025-07-12 10:13:33.808 [INFO][4388] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1" host="localhost" Jul 12 10:13:33.843618 containerd[1615]: 2025-07-12 10:13:33.811 [INFO][4388] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 10:13:33.843618 containerd[1615]: 2025-07-12 10:13:33.815 [INFO][4388] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 10:13:33.843618 containerd[1615]: 2025-07-12 10:13:33.816 [INFO][4388] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:33.843618 containerd[1615]: 2025-07-12 10:13:33.817 [INFO][4388] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:33.843618 containerd[1615]: 2025-07-12 10:13:33.817 [INFO][4388] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1" host="localhost" Jul 12 10:13:33.844434 containerd[1615]: 2025-07-12 10:13:33.818 [INFO][4388] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1 Jul 12 10:13:33.844434 containerd[1615]: 2025-07-12 10:13:33.820 [INFO][4388] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1" host="localhost" Jul 12 10:13:33.844434 containerd[1615]: 2025-07-12 10:13:33.823 [INFO][4388] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1" host="localhost" Jul 12 10:13:33.844434 containerd[1615]: 2025-07-12 10:13:33.823 [INFO][4388] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] 
handle="k8s-pod-network.46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1" host="localhost" Jul 12 10:13:33.844434 containerd[1615]: 2025-07-12 10:13:33.823 [INFO][4388] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 10:13:33.844434 containerd[1615]: 2025-07-12 10:13:33.823 [INFO][4388] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1" HandleID="k8s-pod-network.46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1" Workload="localhost-k8s-coredns--674b8bbfcf--wlj2s-eth0" Jul 12 10:13:33.844569 containerd[1615]: 2025-07-12 10:13:33.825 [INFO][4376] cni-plugin/k8s.go 418: Populated endpoint ContainerID="46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1" Namespace="kube-system" Pod="coredns-674b8bbfcf-wlj2s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wlj2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wlj2s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c92b6e91-8a72-4673-931d-6e08cd5a1419", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-wlj2s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ea4576f333", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:33.845556 containerd[1615]: 2025-07-12 10:13:33.825 [INFO][4376] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1" Namespace="kube-system" Pod="coredns-674b8bbfcf-wlj2s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wlj2s-eth0" Jul 12 10:13:33.845556 containerd[1615]: 2025-07-12 10:13:33.825 [INFO][4376] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ea4576f333 ContainerID="46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1" Namespace="kube-system" Pod="coredns-674b8bbfcf-wlj2s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wlj2s-eth0" Jul 12 10:13:33.845556 containerd[1615]: 2025-07-12 10:13:33.829 [INFO][4376] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1" Namespace="kube-system" Pod="coredns-674b8bbfcf-wlj2s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wlj2s-eth0" Jul 12 10:13:33.845611 containerd[1615]: 2025-07-12 10:13:33.829 [INFO][4376] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1" Namespace="kube-system" Pod="coredns-674b8bbfcf-wlj2s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wlj2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wlj2s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c92b6e91-8a72-4673-931d-6e08cd5a1419", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1", Pod:"coredns-674b8bbfcf-wlj2s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ea4576f333", MAC:"b2:41:cc:d5:42:59", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:33.845611 containerd[1615]: 2025-07-12 10:13:33.840 [INFO][4376] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1" Namespace="kube-system" Pod="coredns-674b8bbfcf-wlj2s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wlj2s-eth0" Jul 12 10:13:33.861760 containerd[1615]: time="2025-07-12T10:13:33.861727385Z" level=info msg="connecting to shim 46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1" address="unix:///run/containerd/s/442c7a29f715d44e63d1f19fd4993143f5e8be4bd326fed85f81d073cace7001" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:13:33.885480 systemd[1]: Started cri-containerd-46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1.scope - libcontainer container 46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1. 
Jul 12 10:13:33.893544 systemd-resolved[1491]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 10:13:33.931523 containerd[1615]: time="2025-07-12T10:13:33.931497231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wlj2s,Uid:c92b6e91-8a72-4673-931d-6e08cd5a1419,Namespace:kube-system,Attempt:0,} returns sandbox id \"46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1\"" Jul 12 10:13:33.941813 containerd[1615]: time="2025-07-12T10:13:33.941792323Z" level=info msg="CreateContainer within sandbox \"46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 10:13:33.955079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount627279290.mount: Deactivated successfully. Jul 12 10:13:33.956589 containerd[1615]: time="2025-07-12T10:13:33.955520111Z" level=info msg="Container c22f7eab6796c8cdcba795d892dac05ed4b14d3935e0626d02fa6216248161f1: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:33.958432 containerd[1615]: time="2025-07-12T10:13:33.958394951Z" level=info msg="CreateContainer within sandbox \"46a26e973d1ccd6cec87b8642bdb687afa2407408816f15a1bb5b184dc388ac1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c22f7eab6796c8cdcba795d892dac05ed4b14d3935e0626d02fa6216248161f1\"" Jul 12 10:13:33.959328 containerd[1615]: time="2025-07-12T10:13:33.958903658Z" level=info msg="StartContainer for \"c22f7eab6796c8cdcba795d892dac05ed4b14d3935e0626d02fa6216248161f1\"" Jul 12 10:13:33.959461 containerd[1615]: time="2025-07-12T10:13:33.959450146Z" level=info msg="connecting to shim c22f7eab6796c8cdcba795d892dac05ed4b14d3935e0626d02fa6216248161f1" address="unix:///run/containerd/s/442c7a29f715d44e63d1f19fd4993143f5e8be4bd326fed85f81d073cace7001" protocol=ttrpc version=3 Jul 12 10:13:33.968883 kubelet[2945]: I0712 10:13:33.968845 2945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-68dd487c95-2shq2" podStartSLOduration=2.798329068 podStartE2EDuration="5.968829783s" podCreationTimestamp="2025-07-12 10:13:28 +0000 UTC" firstStartedPulling="2025-07-12 10:13:30.018127493 +0000 UTC m=+34.441054624" lastFinishedPulling="2025-07-12 10:13:33.1886282 +0000 UTC m=+37.611555339" observedRunningTime="2025-07-12 10:13:33.964227576 +0000 UTC m=+38.387154710" watchObservedRunningTime="2025-07-12 10:13:33.968829783 +0000 UTC m=+38.391756926" Jul 12 10:13:33.980534 systemd[1]: Started cri-containerd-c22f7eab6796c8cdcba795d892dac05ed4b14d3935e0626d02fa6216248161f1.scope - libcontainer container c22f7eab6796c8cdcba795d892dac05ed4b14d3935e0626d02fa6216248161f1. 
Jul 12 10:13:34.008416 containerd[1615]: time="2025-07-12T10:13:34.008345225Z" level=info msg="StartContainer for \"c22f7eab6796c8cdcba795d892dac05ed4b14d3935e0626d02fa6216248161f1\" returns successfully" Jul 12 10:13:34.693884 containerd[1615]: time="2025-07-12T10:13:34.693818678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddbdff869-9bsjp,Uid:8e287385-4882-4403-b563-f15bc4f0d568,Namespace:calico-apiserver,Attempt:0,}" Jul 12 10:13:34.694667 containerd[1615]: time="2025-07-12T10:13:34.694315496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69cd65bdbc-6w6g6,Uid:39a544b0-4d2b-4974-8209-28c325f83665,Namespace:calico-system,Attempt:0,}" Jul 12 10:13:34.771244 systemd-networkd[1543]: cali816224d8db8: Link UP Jul 12 10:13:34.771351 systemd-networkd[1543]: cali816224d8db8: Gained carrier Jul 12 10:13:34.782986 containerd[1615]: 2025-07-12 10:13:34.719 [INFO][4502] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 10:13:34.782986 containerd[1615]: 2025-07-12 10:13:34.728 [INFO][4502] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--69cd65bdbc--6w6g6-eth0 calico-kube-controllers-69cd65bdbc- calico-system 39a544b0-4d2b-4974-8209-28c325f83665 805 0 2025-07-12 10:13:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69cd65bdbc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-69cd65bdbc-6w6g6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali816224d8db8 [] [] }} ContainerID="20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8" Namespace="calico-system" Pod="calico-kube-controllers-69cd65bdbc-6w6g6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69cd65bdbc--6w6g6-" Jul 12 10:13:34.782986 containerd[1615]: 2025-07-12 10:13:34.728 [INFO][4502] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8" Namespace="calico-system" Pod="calico-kube-controllers-69cd65bdbc-6w6g6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69cd65bdbc--6w6g6-eth0" Jul 12 10:13:34.782986 containerd[1615]: 2025-07-12 10:13:34.748 [INFO][4527] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8" HandleID="k8s-pod-network.20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8" Workload="localhost-k8s-calico--kube--controllers--69cd65bdbc--6w6g6-eth0" Jul 12 10:13:34.782986 containerd[1615]: 2025-07-12 10:13:34.748 [INFO][4527] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8" HandleID="k8s-pod-network.20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8" Workload="localhost-k8s-calico--kube--controllers--69cd65bdbc--6w6g6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-69cd65bdbc-6w6g6", "timestamp":"2025-07-12 10:13:34.748618942 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 10:13:34.782986 containerd[1615]: 2025-07-12 10:13:34.748 [INFO][4527] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 10:13:34.782986 containerd[1615]: 2025-07-12 10:13:34.748 [INFO][4527] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 10:13:34.782986 containerd[1615]: 2025-07-12 10:13:34.748 [INFO][4527] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 10:13:34.782986 containerd[1615]: 2025-07-12 10:13:34.753 [INFO][4527] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8" host="localhost" Jul 12 10:13:34.782986 containerd[1615]: 2025-07-12 10:13:34.756 [INFO][4527] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 10:13:34.782986 containerd[1615]: 2025-07-12 10:13:34.758 [INFO][4527] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 10:13:34.782986 containerd[1615]: 2025-07-12 10:13:34.759 [INFO][4527] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:34.782986 containerd[1615]: 2025-07-12 10:13:34.761 [INFO][4527] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:34.782986 containerd[1615]: 2025-07-12 10:13:34.761 [INFO][4527] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8" host="localhost" Jul 12 10:13:34.782986 containerd[1615]: 2025-07-12 10:13:34.762 [INFO][4527] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8 Jul 12 10:13:34.782986 containerd[1615]: 2025-07-12 10:13:34.763 [INFO][4527] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8" host="localhost" Jul 12 10:13:34.782986 containerd[1615]: 2025-07-12 10:13:34.766 [INFO][4527] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8" host="localhost" Jul 12 10:13:34.782986 containerd[1615]: 2025-07-12 10:13:34.766 [INFO][4527] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8" host="localhost" Jul 12 10:13:34.782986 containerd[1615]: 2025-07-12 10:13:34.766 [INFO][4527] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 10:13:34.782986 containerd[1615]: 2025-07-12 10:13:34.766 [INFO][4527] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8" HandleID="k8s-pod-network.20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8" Workload="localhost-k8s-calico--kube--controllers--69cd65bdbc--6w6g6-eth0" Jul 12 10:13:34.785244 containerd[1615]: 2025-07-12 10:13:34.769 [INFO][4502] cni-plugin/k8s.go 418: Populated endpoint ContainerID="20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8" Namespace="calico-system" Pod="calico-kube-controllers-69cd65bdbc-6w6g6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69cd65bdbc--6w6g6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69cd65bdbc--6w6g6-eth0", GenerateName:"calico-kube-controllers-69cd65bdbc-", Namespace:"calico-system", SelfLink:"", UID:"39a544b0-4d2b-4974-8209-28c325f83665", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69cd65bdbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-69cd65bdbc-6w6g6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali816224d8db8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:34.785244 containerd[1615]: 2025-07-12 10:13:34.769 [INFO][4502] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8" Namespace="calico-system" Pod="calico-kube-controllers-69cd65bdbc-6w6g6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69cd65bdbc--6w6g6-eth0" Jul 12 10:13:34.785244 containerd[1615]: 2025-07-12 10:13:34.769 [INFO][4502] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali816224d8db8 ContainerID="20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8" Namespace="calico-system" Pod="calico-kube-controllers-69cd65bdbc-6w6g6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69cd65bdbc--6w6g6-eth0" Jul 12 10:13:34.785244 containerd[1615]: 2025-07-12 10:13:34.774 [INFO][4502] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8" Namespace="calico-system" Pod="calico-kube-controllers-69cd65bdbc-6w6g6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69cd65bdbc--6w6g6-eth0" Jul 12 10:13:34.785244 containerd[1615]: 2025-07-12 10:13:34.774 [INFO][4502] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8" Namespace="calico-system" Pod="calico-kube-controllers-69cd65bdbc-6w6g6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69cd65bdbc--6w6g6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69cd65bdbc--6w6g6-eth0", GenerateName:"calico-kube-controllers-69cd65bdbc-", Namespace:"calico-system", SelfLink:"", UID:"39a544b0-4d2b-4974-8209-28c325f83665", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69cd65bdbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8", Pod:"calico-kube-controllers-69cd65bdbc-6w6g6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali816224d8db8", MAC:"b6:cc:71:00:6f:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:34.785244 containerd[1615]: 2025-07-12 10:13:34.781 [INFO][4502] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8" Namespace="calico-system" Pod="calico-kube-controllers-69cd65bdbc-6w6g6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69cd65bdbc--6w6g6-eth0" Jul 12 10:13:34.805391 containerd[1615]: time="2025-07-12T10:13:34.805361242Z" level=info msg="connecting to shim 20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8" address="unix:///run/containerd/s/f1b3038c1949d537efa833745434743cc56fcc25b3b1b9be4400b6388cf6cb8b" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:13:34.826549 systemd[1]: Started cri-containerd-20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8.scope - libcontainer container 20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8. 
Jul 12 10:13:34.833553 systemd-resolved[1491]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 10:13:34.858811 containerd[1615]: time="2025-07-12T10:13:34.858717979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69cd65bdbc-6w6g6,Uid:39a544b0-4d2b-4974-8209-28c325f83665,Namespace:calico-system,Attempt:0,} returns sandbox id \"20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8\"" Jul 12 10:13:34.860796 containerd[1615]: time="2025-07-12T10:13:34.860777929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 12 10:13:34.877510 systemd-networkd[1543]: cali0354d5a5d6d: Link UP Jul 12 10:13:34.877966 systemd-networkd[1543]: cali0354d5a5d6d: Gained carrier Jul 12 10:13:34.889141 containerd[1615]: 2025-07-12 10:13:34.719 [INFO][4500] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 10:13:34.889141 containerd[1615]: 2025-07-12 10:13:34.726 [INFO][4500] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--ddbdff869--9bsjp-eth0 calico-apiserver-ddbdff869- calico-apiserver 8e287385-4882-4403-b563-f15bc4f0d568 804 0 2025-07-12 10:13:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ddbdff869 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-ddbdff869-9bsjp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0354d5a5d6d [] [] }} ContainerID="3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58" Namespace="calico-apiserver" Pod="calico-apiserver-ddbdff869-9bsjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddbdff869--9bsjp-" Jul 12 10:13:34.889141 containerd[1615]: 2025-07-12 10:13:34.726 [INFO][4500] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58" Namespace="calico-apiserver" Pod="calico-apiserver-ddbdff869-9bsjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddbdff869--9bsjp-eth0" Jul 12 10:13:34.889141 containerd[1615]: 2025-07-12 10:13:34.754 [INFO][4525] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58" HandleID="k8s-pod-network.3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58" Workload="localhost-k8s-calico--apiserver--ddbdff869--9bsjp-eth0" Jul 12 10:13:34.889141 containerd[1615]: 2025-07-12 10:13:34.754 [INFO][4525] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58" HandleID="k8s-pod-network.3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58" Workload="localhost-k8s-calico--apiserver--ddbdff869--9bsjp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-ddbdff869-9bsjp", "timestamp":"2025-07-12 10:13:34.754758824 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 10:13:34.889141 containerd[1615]: 
2025-07-12 10:13:34.754 [INFO][4525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 10:13:34.889141 containerd[1615]: 2025-07-12 10:13:34.766 [INFO][4525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 10:13:34.889141 containerd[1615]: 2025-07-12 10:13:34.766 [INFO][4525] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 10:13:34.889141 containerd[1615]: 2025-07-12 10:13:34.854 [INFO][4525] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58" host="localhost" Jul 12 10:13:34.889141 containerd[1615]: 2025-07-12 10:13:34.861 [INFO][4525] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 10:13:34.889141 containerd[1615]: 2025-07-12 10:13:34.866 [INFO][4525] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 10:13:34.889141 containerd[1615]: 2025-07-12 10:13:34.867 [INFO][4525] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:34.889141 containerd[1615]: 2025-07-12 10:13:34.868 [INFO][4525] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:34.889141 containerd[1615]: 2025-07-12 10:13:34.868 [INFO][4525] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58" host="localhost" Jul 12 10:13:34.889141 containerd[1615]: 2025-07-12 10:13:34.869 [INFO][4525] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58 Jul 12 10:13:34.889141 containerd[1615]: 2025-07-12 10:13:34.871 [INFO][4525] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58" host="localhost" Jul 12 10:13:34.889141 containerd[1615]: 2025-07-12 10:13:34.873 [INFO][4525] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58" host="localhost" Jul 12 10:13:34.889141 containerd[1615]: 2025-07-12 10:13:34.873 [INFO][4525] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58" host="localhost" Jul 12 10:13:34.889141 containerd[1615]: 2025-07-12 10:13:34.873 [INFO][4525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 10:13:34.889141 containerd[1615]: 2025-07-12 10:13:34.873 [INFO][4525] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58" HandleID="k8s-pod-network.3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58" Workload="localhost-k8s-calico--apiserver--ddbdff869--9bsjp-eth0" Jul 12 10:13:34.890386 containerd[1615]: 2025-07-12 10:13:34.874 [INFO][4500] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58" Namespace="calico-apiserver" Pod="calico-apiserver-ddbdff869-9bsjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddbdff869--9bsjp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ddbdff869--9bsjp-eth0", GenerateName:"calico-apiserver-ddbdff869-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e287385-4882-4403-b563-f15bc4f0d568", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ddbdff869", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-ddbdff869-9bsjp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0354d5a5d6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:34.890386 containerd[1615]: 2025-07-12 10:13:34.874 [INFO][4500] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58" Namespace="calico-apiserver" Pod="calico-apiserver-ddbdff869-9bsjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddbdff869--9bsjp-eth0" Jul 12 10:13:34.890386 containerd[1615]: 2025-07-12 10:13:34.875 [INFO][4500] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0354d5a5d6d ContainerID="3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58" Namespace="calico-apiserver" Pod="calico-apiserver-ddbdff869-9bsjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddbdff869--9bsjp-eth0" Jul 12 10:13:34.890386 containerd[1615]: 2025-07-12 10:13:34.878 [INFO][4500] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58" Namespace="calico-apiserver" Pod="calico-apiserver-ddbdff869-9bsjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddbdff869--9bsjp-eth0" Jul 12 10:13:34.890386 containerd[1615]: 2025-07-12 10:13:34.878 [INFO][4500] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58" Namespace="calico-apiserver" Pod="calico-apiserver-ddbdff869-9bsjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddbdff869--9bsjp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ddbdff869--9bsjp-eth0", GenerateName:"calico-apiserver-ddbdff869-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e287385-4882-4403-b563-f15bc4f0d568", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ddbdff869", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58", Pod:"calico-apiserver-ddbdff869-9bsjp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0354d5a5d6d", MAC:"42:4d:ec:ec:da:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:34.890386 containerd[1615]: 2025-07-12 10:13:34.886 [INFO][4500] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58" Namespace="calico-apiserver" Pod="calico-apiserver-ddbdff869-9bsjp" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddbdff869--9bsjp-eth0" Jul 12 10:13:34.901258 containerd[1615]: time="2025-07-12T10:13:34.901215568Z" level=info msg="connecting to shim 3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58" address="unix:///run/containerd/s/b720c9d88484c5d426ac14fda4a61bf1f966d61e375454188a656ec1884277c8" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:13:34.920571 systemd[1]: Started cri-containerd-3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58.scope - libcontainer container 3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58. 
Jul 12 10:13:34.922025 kubelet[2945]: I0712 10:13:34.921996 2945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wlj2s" podStartSLOduration=33.921982637 podStartE2EDuration="33.921982637s" podCreationTimestamp="2025-07-12 10:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 10:13:34.920242264 +0000 UTC m=+39.343169401" watchObservedRunningTime="2025-07-12 10:13:34.921982637 +0000 UTC m=+39.344909771" Jul 12 10:13:34.936012 systemd-resolved[1491]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 10:13:34.961212 containerd[1615]: time="2025-07-12T10:13:34.961034367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddbdff869-9bsjp,Uid:8e287385-4882-4403-b563-f15bc4f0d568,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58\"" Jul 12 10:13:35.137550 systemd-networkd[1543]: cali8ea4576f333: Gained IPv6LL Jul 12 10:13:35.701366 containerd[1615]: time="2025-07-12T10:13:35.701338522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddbdff869-x7wvt,Uid:f018b885-cd4e-4448-9579-6bb3b0f05831,Namespace:calico-apiserver,Attempt:0,}" Jul 12 10:13:35.702450 containerd[1615]: time="2025-07-12T10:13:35.701631487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-x88vq,Uid:c16afb24-c7cf-4e3b-9137-2bb6ce5d9b37,Namespace:calico-system,Attempt:0,}" Jul 12 10:13:35.810680 systemd-networkd[1543]: caliee0e9e5b0e4: Link UP Jul 12 10:13:35.811159 systemd-networkd[1543]: caliee0e9e5b0e4: Gained carrier Jul 12 10:13:35.823892 containerd[1615]: 2025-07-12 10:13:35.727 [INFO][4667] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 10:13:35.823892 containerd[1615]: 2025-07-12 10:13:35.738 [INFO][4667] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--ddbdff869--x7wvt-eth0 calico-apiserver-ddbdff869- calico-apiserver f018b885-cd4e-4448-9579-6bb3b0f05831 806 0 2025-07-12 10:13:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ddbdff869 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-ddbdff869-x7wvt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliee0e9e5b0e4 [] [] }} ContainerID="703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64" Namespace="calico-apiserver" Pod="calico-apiserver-ddbdff869-x7wvt" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddbdff869--x7wvt-" Jul 12 10:13:35.823892 containerd[1615]: 2025-07-12 10:13:35.738 [INFO][4667] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64" Namespace="calico-apiserver" Pod="calico-apiserver-ddbdff869-x7wvt" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddbdff869--x7wvt-eth0" Jul 12 10:13:35.823892 containerd[1615]: 2025-07-12 10:13:35.765 [INFO][4691] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64" 
HandleID="k8s-pod-network.703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64" Workload="localhost-k8s-calico--apiserver--ddbdff869--x7wvt-eth0" Jul 12 10:13:35.823892 containerd[1615]: 2025-07-12 10:13:35.765 [INFO][4691] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64" HandleID="k8s-pod-network.703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64" Workload="localhost-k8s-calico--apiserver--ddbdff869--x7wvt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7100), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-ddbdff869-x7wvt", "timestamp":"2025-07-12 10:13:35.765064078 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 10:13:35.823892 containerd[1615]: 2025-07-12 10:13:35.765 [INFO][4691] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 10:13:35.823892 containerd[1615]: 2025-07-12 10:13:35.765 [INFO][4691] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 10:13:35.823892 containerd[1615]: 2025-07-12 10:13:35.765 [INFO][4691] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 10:13:35.823892 containerd[1615]: 2025-07-12 10:13:35.774 [INFO][4691] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64" host="localhost" Jul 12 10:13:35.823892 containerd[1615]: 2025-07-12 10:13:35.779 [INFO][4691] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 10:13:35.823892 containerd[1615]: 2025-07-12 10:13:35.786 [INFO][4691] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 10:13:35.823892 containerd[1615]: 2025-07-12 10:13:35.788 [INFO][4691] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:35.823892 containerd[1615]: 2025-07-12 10:13:35.790 [INFO][4691] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:35.823892 containerd[1615]: 2025-07-12 10:13:35.790 [INFO][4691] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64" host="localhost" Jul 12 10:13:35.823892 containerd[1615]: 2025-07-12 10:13:35.792 [INFO][4691] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64 Jul 12 10:13:35.823892 containerd[1615]: 2025-07-12 10:13:35.795 [INFO][4691] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64" host="localhost" Jul 12 10:13:35.823892 containerd[1615]: 2025-07-12 10:13:35.801 [INFO][4691] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64" host="localhost" Jul 12 10:13:35.823892 containerd[1615]: 2025-07-12 10:13:35.801 [INFO][4691] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] 
handle="k8s-pod-network.703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64" host="localhost" Jul 12 10:13:35.823892 containerd[1615]: 2025-07-12 10:13:35.802 [INFO][4691] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 10:13:35.823892 containerd[1615]: 2025-07-12 10:13:35.802 [INFO][4691] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64" HandleID="k8s-pod-network.703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64" Workload="localhost-k8s-calico--apiserver--ddbdff869--x7wvt-eth0" Jul 12 10:13:35.826475 containerd[1615]: 2025-07-12 10:13:35.806 [INFO][4667] cni-plugin/k8s.go 418: Populated endpoint ContainerID="703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64" Namespace="calico-apiserver" Pod="calico-apiserver-ddbdff869-x7wvt" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddbdff869--x7wvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ddbdff869--x7wvt-eth0", GenerateName:"calico-apiserver-ddbdff869-", Namespace:"calico-apiserver", SelfLink:"", UID:"f018b885-cd4e-4448-9579-6bb3b0f05831", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ddbdff869", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-ddbdff869-x7wvt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee0e9e5b0e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:35.826475 containerd[1615]: 2025-07-12 10:13:35.806 [INFO][4667] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64" Namespace="calico-apiserver" Pod="calico-apiserver-ddbdff869-x7wvt" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddbdff869--x7wvt-eth0" Jul 12 10:13:35.826475 containerd[1615]: 2025-07-12 10:13:35.806 [INFO][4667] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee0e9e5b0e4 ContainerID="703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64" Namespace="calico-apiserver" Pod="calico-apiserver-ddbdff869-x7wvt" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddbdff869--x7wvt-eth0" Jul 12 10:13:35.826475 containerd[1615]: 2025-07-12 10:13:35.813 [INFO][4667] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64" Namespace="calico-apiserver" Pod="calico-apiserver-ddbdff869-x7wvt" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--ddbdff869--x7wvt-eth0" Jul 12 10:13:35.826475 containerd[1615]: 2025-07-12 10:13:35.813 [INFO][4667] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64" Namespace="calico-apiserver" Pod="calico-apiserver-ddbdff869-x7wvt" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddbdff869--x7wvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ddbdff869--x7wvt-eth0", GenerateName:"calico-apiserver-ddbdff869-", Namespace:"calico-apiserver", SelfLink:"", UID:"f018b885-cd4e-4448-9579-6bb3b0f05831", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ddbdff869", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64", Pod:"calico-apiserver-ddbdff869-x7wvt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee0e9e5b0e4", MAC:"d6:5d:12:2f:ef:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:35.826475 containerd[1615]: 2025-07-12 10:13:35.822 [INFO][4667] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64" Namespace="calico-apiserver" Pod="calico-apiserver-ddbdff869-x7wvt" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddbdff869--x7wvt-eth0" Jul 12 10:13:35.835879 containerd[1615]: time="2025-07-12T10:13:35.835836203Z" level=info msg="connecting to shim 703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64" address="unix:///run/containerd/s/15abe9293e4aec2d6eaa96b0f981acd91341d025beb8acf4d063b935d898ef62" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:13:35.856524 systemd[1]: Started cri-containerd-703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64.scope - libcontainer container 703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64. 
Jul 12 10:13:35.865847 systemd-resolved[1491]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 10:13:35.909687 systemd-networkd[1543]: calia82e2283634: Link UP Jul 12 10:13:35.911154 containerd[1615]: time="2025-07-12T10:13:35.910952543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddbdff869-x7wvt,Uid:f018b885-cd4e-4448-9579-6bb3b0f05831,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64\"" Jul 12 10:13:35.911370 systemd-networkd[1543]: calia82e2283634: Gained carrier Jul 12 10:13:35.945379 containerd[1615]: 2025-07-12 10:13:35.727 [INFO][4664] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 10:13:35.945379 containerd[1615]: 2025-07-12 10:13:35.738 [INFO][4664] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--x88vq-eth0 goldmane-768f4c5c69- calico-system c16afb24-c7cf-4e3b-9137-2bb6ce5d9b37 796 0 2025-07-12 10:13:11 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-x88vq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia82e2283634 [] [] }} ContainerID="c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844" Namespace="calico-system" Pod="goldmane-768f4c5c69-x88vq" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--x88vq-" Jul 12 10:13:35.945379 containerd[1615]: 2025-07-12 10:13:35.739 [INFO][4664] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844" Namespace="calico-system" Pod="goldmane-768f4c5c69-x88vq" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--x88vq-eth0" Jul 12 10:13:35.945379 containerd[1615]: 2025-07-12 10:13:35.770 [INFO][4696] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844" HandleID="k8s-pod-network.c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844" Workload="localhost-k8s-goldmane--768f4c5c69--x88vq-eth0" Jul 12 10:13:35.945379 containerd[1615]: 2025-07-12 10:13:35.770 [INFO][4696] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844" HandleID="k8s-pod-network.c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844" Workload="localhost-k8s-goldmane--768f4c5c69--x88vq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-x88vq", "timestamp":"2025-07-12 10:13:35.770611618 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 10:13:35.945379 containerd[1615]: 2025-07-12 10:13:35.770 [INFO][4696] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 10:13:35.945379 containerd[1615]: 2025-07-12 10:13:35.802 [INFO][4696] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 10:13:35.945379 containerd[1615]: 2025-07-12 10:13:35.802 [INFO][4696] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 10:13:35.945379 containerd[1615]: 2025-07-12 10:13:35.874 [INFO][4696] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844" host="localhost" Jul 12 10:13:35.945379 containerd[1615]: 2025-07-12 10:13:35.879 [INFO][4696] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 10:13:35.945379 containerd[1615]: 2025-07-12 10:13:35.883 [INFO][4696] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 10:13:35.945379 containerd[1615]: 2025-07-12 10:13:35.884 [INFO][4696] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:35.945379 containerd[1615]: 2025-07-12 10:13:35.885 [INFO][4696] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:35.945379 containerd[1615]: 2025-07-12 10:13:35.885 [INFO][4696] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844" host="localhost" Jul 12 10:13:35.945379 containerd[1615]: 2025-07-12 10:13:35.886 [INFO][4696] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844 Jul 12 10:13:35.945379 containerd[1615]: 2025-07-12 10:13:35.891 [INFO][4696] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844" host="localhost" Jul 12 10:13:35.945379 containerd[1615]: 2025-07-12 10:13:35.899 [INFO][4696] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844" host="localhost" Jul 12 10:13:35.945379 containerd[1615]: 2025-07-12 10:13:35.899 [INFO][4696] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844" host="localhost" Jul 12 10:13:35.945379 containerd[1615]: 2025-07-12 10:13:35.899 [INFO][4696] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
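The IPAM sequence above shows the node "localhost" reusing its affine block 192.168.88.128/26: the host-wide lock is taken, the block affinity is confirmed, and the next free address (192.168.88.134/26 here, following 192.168.88.133/26 above; the later endpoints .135 and .136 are claimed the same way) is assigned before the lock is released. A minimal Go sketch of the block arithmetic these entries imply (illustrative only, not Calico source; blockFor is a helper invented for this example):

package main

import (
	"fmt"
	"net/netip"
)

// blockFor masks an address down to its covering block of the given size,
// mirroring how the per-host /26 affinity groups the addresses assigned in this log.
func blockFor(ip netip.Addr, bits int) netip.Prefix {
	p, err := ip.Prefix(bits)
	if err != nil {
		panic(err)
	}
	return p
}

func main() {
	// Addresses handed out to the workload endpoints in this log.
	for _, s := range []string{"192.168.88.133", "192.168.88.134", "192.168.88.135", "192.168.88.136"} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%s -> %s\n", s, blockFor(ip, 26)) // each prints block 192.168.88.128/26
	}
}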
Jul 12 10:13:35.945379 containerd[1615]: 2025-07-12 10:13:35.900 [INFO][4696] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844" HandleID="k8s-pod-network.c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844" Workload="localhost-k8s-goldmane--768f4c5c69--x88vq-eth0" Jul 12 10:13:35.947789 containerd[1615]: 2025-07-12 10:13:35.903 [INFO][4664] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844" Namespace="calico-system" Pod="goldmane-768f4c5c69-x88vq" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--x88vq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--x88vq-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"c16afb24-c7cf-4e3b-9137-2bb6ce5d9b37", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-x88vq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia82e2283634", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:35.947789 containerd[1615]: 2025-07-12 10:13:35.904 [INFO][4664] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844" Namespace="calico-system" Pod="goldmane-768f4c5c69-x88vq" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--x88vq-eth0" Jul 12 10:13:35.947789 containerd[1615]: 2025-07-12 10:13:35.905 [INFO][4664] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia82e2283634 ContainerID="c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844" Namespace="calico-system" Pod="goldmane-768f4c5c69-x88vq" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--x88vq-eth0" Jul 12 10:13:35.947789 containerd[1615]: 2025-07-12 10:13:35.916 [INFO][4664] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844" Namespace="calico-system" Pod="goldmane-768f4c5c69-x88vq" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--x88vq-eth0" Jul 12 10:13:35.947789 containerd[1615]: 2025-07-12 10:13:35.918 [INFO][4664] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844" Namespace="calico-system" Pod="goldmane-768f4c5c69-x88vq" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--x88vq-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--x88vq-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"c16afb24-c7cf-4e3b-9137-2bb6ce5d9b37", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844", Pod:"goldmane-768f4c5c69-x88vq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia82e2283634", MAC:"ee:e7:83:a2:22:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:35.947789 containerd[1615]: 2025-07-12 10:13:35.936 [INFO][4664] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844" Namespace="calico-system" Pod="goldmane-768f4c5c69-x88vq" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--x88vq-eth0" Jul 12 10:13:35.979761 containerd[1615]: time="2025-07-12T10:13:35.979611581Z" level=info msg="connecting to shim c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844" address="unix:///run/containerd/s/f25b54b2065831bbee0ca7daf35cf9acfc1ecc18160ca884d33012c2c47bc21b" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:13:36.003080 systemd[1]: Started cri-containerd-c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844.scope - libcontainer container c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844. 
Jul 12 10:13:36.039099 systemd-resolved[1491]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 10:13:36.094728 containerd[1615]: time="2025-07-12T10:13:36.094610303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-x88vq,Uid:c16afb24-c7cf-4e3b-9137-2bb6ce5d9b37,Namespace:calico-system,Attempt:0,} returns sandbox id \"c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844\"" Jul 12 10:13:36.162487 systemd-networkd[1543]: cali0354d5a5d6d: Gained IPv6LL Jul 12 10:13:36.225473 systemd-networkd[1543]: cali816224d8db8: Gained IPv6LL Jul 12 10:13:36.693268 containerd[1615]: time="2025-07-12T10:13:36.693226945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5skpj,Uid:c9d481cf-b5a0-47fe-a981-c3ce861cf9d4,Namespace:calico-system,Attempt:0,}" Jul 12 10:13:37.058710 systemd-networkd[1543]: calia82e2283634: Gained IPv6LL Jul 12 10:13:37.104961 systemd-networkd[1543]: caliab89c4ade0f: Link UP Jul 12 10:13:37.105875 systemd-networkd[1543]: caliab89c4ade0f: Gained carrier Jul 12 10:13:37.120094 containerd[1615]: 2025-07-12 10:13:37.018 [INFO][4834] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 10:13:37.120094 containerd[1615]: 2025-07-12 10:13:37.031 [INFO][4834] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5skpj-eth0 csi-node-driver- calico-system c9d481cf-b5a0-47fe-a981-c3ce861cf9d4 689 0 2025-07-12 10:13:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5skpj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliab89c4ade0f [] [] }} ContainerID="6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe" Namespace="calico-system" Pod="csi-node-driver-5skpj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5skpj-" Jul 12 10:13:37.120094 containerd[1615]: 2025-07-12 10:13:37.031 [INFO][4834] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe" Namespace="calico-system" Pod="csi-node-driver-5skpj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5skpj-eth0" Jul 12 10:13:37.120094 containerd[1615]: 2025-07-12 10:13:37.061 [INFO][4850] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe" HandleID="k8s-pod-network.6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe" Workload="localhost-k8s-csi--node--driver--5skpj-eth0" Jul 12 10:13:37.120094 containerd[1615]: 2025-07-12 10:13:37.061 [INFO][4850] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe" HandleID="k8s-pod-network.6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe" Workload="localhost-k8s-csi--node--driver--5skpj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5be0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5skpj", "timestamp":"2025-07-12 10:13:37.061816641 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 10:13:37.120094 containerd[1615]: 2025-07-12 10:13:37.061 [INFO][4850] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 10:13:37.120094 containerd[1615]: 2025-07-12 10:13:37.062 [INFO][4850] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 10:13:37.120094 containerd[1615]: 2025-07-12 10:13:37.062 [INFO][4850] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 10:13:37.120094 containerd[1615]: 2025-07-12 10:13:37.067 [INFO][4850] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe" host="localhost" Jul 12 10:13:37.120094 containerd[1615]: 2025-07-12 10:13:37.074 [INFO][4850] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 10:13:37.120094 containerd[1615]: 2025-07-12 10:13:37.088 [INFO][4850] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 10:13:37.120094 containerd[1615]: 2025-07-12 10:13:37.089 [INFO][4850] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:37.120094 containerd[1615]: 2025-07-12 10:13:37.090 [INFO][4850] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:37.120094 containerd[1615]: 2025-07-12 10:13:37.090 [INFO][4850] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe" host="localhost" Jul 12 10:13:37.120094 containerd[1615]: 2025-07-12 10:13:37.093 [INFO][4850] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe Jul 12 10:13:37.120094 containerd[1615]: 2025-07-12 10:13:37.096 [INFO][4850] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe" host="localhost" Jul 12 10:13:37.120094 containerd[1615]: 2025-07-12 10:13:37.099 [INFO][4850] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe" host="localhost" Jul 12 10:13:37.120094 containerd[1615]: 2025-07-12 10:13:37.099 [INFO][4850] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe" host="localhost" Jul 12 10:13:37.120094 containerd[1615]: 2025-07-12 10:13:37.100 [INFO][4850] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 10:13:37.120094 containerd[1615]: 2025-07-12 10:13:37.100 [INFO][4850] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe" HandleID="k8s-pod-network.6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe" Workload="localhost-k8s-csi--node--driver--5skpj-eth0" Jul 12 10:13:37.121134 containerd[1615]: 2025-07-12 10:13:37.101 [INFO][4834] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe" Namespace="calico-system" Pod="csi-node-driver-5skpj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5skpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5skpj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c9d481cf-b5a0-47fe-a981-c3ce861cf9d4", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5skpj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliab89c4ade0f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:37.121134 containerd[1615]: 2025-07-12 10:13:37.101 [INFO][4834] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe" Namespace="calico-system" Pod="csi-node-driver-5skpj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5skpj-eth0" Jul 12 10:13:37.121134 containerd[1615]: 2025-07-12 10:13:37.101 [INFO][4834] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab89c4ade0f ContainerID="6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe" Namespace="calico-system" Pod="csi-node-driver-5skpj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5skpj-eth0" Jul 12 10:13:37.121134 containerd[1615]: 2025-07-12 10:13:37.106 [INFO][4834] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe" Namespace="calico-system" Pod="csi-node-driver-5skpj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5skpj-eth0" Jul 12 10:13:37.121134 containerd[1615]: 2025-07-12 10:13:37.106 [INFO][4834] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe" Namespace="calico-system" Pod="csi-node-driver-5skpj" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--5skpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5skpj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c9d481cf-b5a0-47fe-a981-c3ce861cf9d4", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe", Pod:"csi-node-driver-5skpj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliab89c4ade0f", MAC:"36:b4:1a:97:64:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:37.121134 containerd[1615]: 2025-07-12 10:13:37.116 [INFO][4834] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe" Namespace="calico-system" Pod="csi-node-driver-5skpj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5skpj-eth0" Jul 12 10:13:37.156555 containerd[1615]: time="2025-07-12T10:13:37.156295106Z" level=info msg="connecting to shim 6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe" address="unix:///run/containerd/s/6e3e20bca984f7d3725aafde0335a4a144569a40958e33c6cff593ea46632587" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:13:37.177613 systemd[1]: Started cri-containerd-6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe.scope - libcontainer container 6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe. 
Jul 12 10:13:37.188178 systemd-resolved[1491]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 10:13:37.199194 containerd[1615]: time="2025-07-12T10:13:37.199130907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5skpj,Uid:c9d481cf-b5a0-47fe-a981-c3ce861cf9d4,Namespace:calico-system,Attempt:0,} returns sandbox id \"6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe\"" Jul 12 10:13:37.633511 systemd-networkd[1543]: caliee0e9e5b0e4: Gained IPv6LL Jul 12 10:13:37.811838 containerd[1615]: time="2025-07-12T10:13:37.811806793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:37.814763 containerd[1615]: time="2025-07-12T10:13:37.814744709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 12 10:13:37.816097 containerd[1615]: time="2025-07-12T10:13:37.816059172Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:37.817839 containerd[1615]: time="2025-07-12T10:13:37.817820420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:37.818228 containerd[1615]: time="2025-07-12T10:13:37.818141727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.957269295s" Jul 12 10:13:37.818299 containerd[1615]: time="2025-07-12T10:13:37.818287019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 12 10:13:37.844113 containerd[1615]: time="2025-07-12T10:13:37.844087251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 10:13:37.851975 containerd[1615]: time="2025-07-12T10:13:37.851955879Z" level=info msg="CreateContainer within sandbox \"20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 12 10:13:37.854801 containerd[1615]: time="2025-07-12T10:13:37.854783733Z" level=info msg="Container af69f0b8966db47dfb44e9a6a2b63cbd2090a155159cb1e52c796e2ca56d7855: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:37.859687 containerd[1615]: time="2025-07-12T10:13:37.859660380Z" level=info msg="CreateContainer within sandbox \"20c5b97abe64d66d51943abd9f84c2c18250bcc043e5804b5e9ad04aacafeda8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"af69f0b8966db47dfb44e9a6a2b63cbd2090a155159cb1e52c796e2ca56d7855\"" Jul 12 10:13:37.860074 containerd[1615]: time="2025-07-12T10:13:37.859985428Z" level=info msg="StartContainer for \"af69f0b8966db47dfb44e9a6a2b63cbd2090a155159cb1e52c796e2ca56d7855\"" Jul 12 10:13:37.861494 containerd[1615]: time="2025-07-12T10:13:37.861465976Z" level=info msg="connecting to shim 
af69f0b8966db47dfb44e9a6a2b63cbd2090a155159cb1e52c796e2ca56d7855" address="unix:///run/containerd/s/f1b3038c1949d537efa833745434743cc56fcc25b3b1b9be4400b6388cf6cb8b" protocol=ttrpc version=3 Jul 12 10:13:37.876497 systemd[1]: Started cri-containerd-af69f0b8966db47dfb44e9a6a2b63cbd2090a155159cb1e52c796e2ca56d7855.scope - libcontainer container af69f0b8966db47dfb44e9a6a2b63cbd2090a155159cb1e52c796e2ca56d7855. Jul 12 10:13:37.921726 containerd[1615]: time="2025-07-12T10:13:37.921638755Z" level=info msg="StartContainer for \"af69f0b8966db47dfb44e9a6a2b63cbd2090a155159cb1e52c796e2ca56d7855\" returns successfully" Jul 12 10:13:37.952875 kubelet[2945]: I0712 10:13:37.952833 2945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-69cd65bdbc-6w6g6" podStartSLOduration=22.993805383 podStartE2EDuration="25.952817254s" podCreationTimestamp="2025-07-12 10:13:12 +0000 UTC" firstStartedPulling="2025-07-12 10:13:34.860552348 +0000 UTC m=+39.283479479" lastFinishedPulling="2025-07-12 10:13:37.819564213 +0000 UTC m=+42.242491350" observedRunningTime="2025-07-12 10:13:37.951211357 +0000 UTC m=+42.374138499" watchObservedRunningTime="2025-07-12 10:13:37.952817254 +0000 UTC m=+42.375744397" Jul 12 10:13:38.002462 containerd[1615]: time="2025-07-12T10:13:38.002438929Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af69f0b8966db47dfb44e9a6a2b63cbd2090a155159cb1e52c796e2ca56d7855\" id:\"fd94a521cd36faef5ea6716dbb51652ab44d0a201c3b81ffdb531ad2b4559e83\" pid:4985 exited_at:{seconds:1752315218 nanos:2210880}" Jul 12 10:13:38.693936 containerd[1615]: time="2025-07-12T10:13:38.693821940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gmrqc,Uid:a3916880-a410-45ec-874b-6fc524a9fcbc,Namespace:kube-system,Attempt:0,}" Jul 12 10:13:38.731422 kubelet[2945]: I0712 10:13:38.731373 2945 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 10:13:38.820346 systemd-networkd[1543]: caliae5d468535f: Link UP Jul 12 10:13:38.821352 systemd-networkd[1543]: caliae5d468535f: Gained carrier Jul 12 10:13:38.831844 containerd[1615]: 2025-07-12 10:13:38.720 [INFO][5015] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 10:13:38.831844 containerd[1615]: 2025-07-12 10:13:38.732 [INFO][5015] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--gmrqc-eth0 coredns-674b8bbfcf- kube-system a3916880-a410-45ec-874b-6fc524a9fcbc 807 0 2025-07-12 10:13:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-gmrqc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliae5d468535f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c" Namespace="kube-system" Pod="coredns-674b8bbfcf-gmrqc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gmrqc-" Jul 12 10:13:38.831844 containerd[1615]: 2025-07-12 10:13:38.732 [INFO][5015] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c" Namespace="kube-system" Pod="coredns-674b8bbfcf-gmrqc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gmrqc-eth0" Jul 12 10:13:38.831844 containerd[1615]: 2025-07-12 
10:13:38.779 [INFO][5027] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c" HandleID="k8s-pod-network.13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c" Workload="localhost-k8s-coredns--674b8bbfcf--gmrqc-eth0" Jul 12 10:13:38.831844 containerd[1615]: 2025-07-12 10:13:38.779 [INFO][5027] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c" HandleID="k8s-pod-network.13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c" Workload="localhost-k8s-coredns--674b8bbfcf--gmrqc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003064f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-gmrqc", "timestamp":"2025-07-12 10:13:38.779466183 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 10:13:38.831844 containerd[1615]: 2025-07-12 10:13:38.779 [INFO][5027] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 10:13:38.831844 containerd[1615]: 2025-07-12 10:13:38.779 [INFO][5027] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 10:13:38.831844 containerd[1615]: 2025-07-12 10:13:38.779 [INFO][5027] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 10:13:38.831844 containerd[1615]: 2025-07-12 10:13:38.791 [INFO][5027] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c" host="localhost" Jul 12 10:13:38.831844 containerd[1615]: 2025-07-12 10:13:38.795 [INFO][5027] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 10:13:38.831844 containerd[1615]: 2025-07-12 10:13:38.799 [INFO][5027] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 10:13:38.831844 containerd[1615]: 2025-07-12 10:13:38.800 [INFO][5027] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:38.831844 containerd[1615]: 2025-07-12 10:13:38.802 [INFO][5027] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:38.831844 containerd[1615]: 2025-07-12 10:13:38.802 [INFO][5027] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c" host="localhost" Jul 12 10:13:38.831844 containerd[1615]: 2025-07-12 10:13:38.803 [INFO][5027] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c Jul 12 10:13:38.831844 containerd[1615]: 2025-07-12 10:13:38.809 [INFO][5027] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c" host="localhost" Jul 12 10:13:38.831844 containerd[1615]: 2025-07-12 10:13:38.815 [INFO][5027] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c" host="localhost" Jul 12 10:13:38.831844 containerd[1615]: 2025-07-12 10:13:38.815 [INFO][5027] ipam/ipam.go 
878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c" host="localhost" Jul 12 10:13:38.831844 containerd[1615]: 2025-07-12 10:13:38.815 [INFO][5027] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 10:13:38.831844 containerd[1615]: 2025-07-12 10:13:38.815 [INFO][5027] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c" HandleID="k8s-pod-network.13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c" Workload="localhost-k8s-coredns--674b8bbfcf--gmrqc-eth0" Jul 12 10:13:38.833665 containerd[1615]: 2025-07-12 10:13:38.817 [INFO][5015] cni-plugin/k8s.go 418: Populated endpoint ContainerID="13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c" Namespace="kube-system" Pod="coredns-674b8bbfcf-gmrqc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gmrqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gmrqc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a3916880-a410-45ec-874b-6fc524a9fcbc", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-gmrqc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliae5d468535f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:38.833665 containerd[1615]: 2025-07-12 10:13:38.818 [INFO][5015] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c" Namespace="kube-system" Pod="coredns-674b8bbfcf-gmrqc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gmrqc-eth0" Jul 12 10:13:38.833665 containerd[1615]: 2025-07-12 10:13:38.818 [INFO][5015] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae5d468535f ContainerID="13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c" Namespace="kube-system" Pod="coredns-674b8bbfcf-gmrqc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gmrqc-eth0" Jul 12 10:13:38.833665 containerd[1615]: 2025-07-12 10:13:38.821 
[INFO][5015] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c" Namespace="kube-system" Pod="coredns-674b8bbfcf-gmrqc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gmrqc-eth0" Jul 12 10:13:38.833665 containerd[1615]: 2025-07-12 10:13:38.822 [INFO][5015] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c" Namespace="kube-system" Pod="coredns-674b8bbfcf-gmrqc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gmrqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gmrqc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a3916880-a410-45ec-874b-6fc524a9fcbc", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c", Pod:"coredns-674b8bbfcf-gmrqc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliae5d468535f", MAC:"ee:0c:7c:a5:b1:d9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:38.833665 containerd[1615]: 2025-07-12 10:13:38.830 [INFO][5015] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c" Namespace="kube-system" Pod="coredns-674b8bbfcf-gmrqc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gmrqc-eth0" Jul 12 10:13:38.849478 systemd-networkd[1543]: caliab89c4ade0f: Gained IPv6LL Jul 12 10:13:39.257302 containerd[1615]: time="2025-07-12T10:13:39.257270634Z" level=info msg="connecting to shim 13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c" address="unix:///run/containerd/s/129b921d4cc5200fd3cf69dadc6ea162ee26d0ebf914d5d4c250cc082e80b256" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:13:39.282496 systemd[1]: Started cri-containerd-13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c.scope - libcontainer container 13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c. 
Jul 12 10:13:39.290841 systemd-resolved[1491]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 10:13:39.325571 containerd[1615]: time="2025-07-12T10:13:39.325547302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gmrqc,Uid:a3916880-a410-45ec-874b-6fc524a9fcbc,Namespace:kube-system,Attempt:0,} returns sandbox id \"13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c\"" Jul 12 10:13:39.362837 containerd[1615]: time="2025-07-12T10:13:39.362806512Z" level=info msg="CreateContainer within sandbox \"13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 10:13:39.397102 containerd[1615]: time="2025-07-12T10:13:39.396500225Z" level=info msg="Container 7205f18bb82fad763e835862dc30dd419cb971c671c84bee73b14914a32efae9: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:39.401183 containerd[1615]: time="2025-07-12T10:13:39.401160957Z" level=info msg="CreateContainer within sandbox \"13e7d44352e51e8848b647ea95ec95631bf31ef70bf1c8bf0b8cd81172ecfa5c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7205f18bb82fad763e835862dc30dd419cb971c671c84bee73b14914a32efae9\"" Jul 12 10:13:39.402606 containerd[1615]: time="2025-07-12T10:13:39.401574492Z" level=info msg="StartContainer for \"7205f18bb82fad763e835862dc30dd419cb971c671c84bee73b14914a32efae9\"" Jul 12 10:13:39.403429 containerd[1615]: time="2025-07-12T10:13:39.403166304Z" level=info msg="connecting to shim 7205f18bb82fad763e835862dc30dd419cb971c671c84bee73b14914a32efae9" address="unix:///run/containerd/s/129b921d4cc5200fd3cf69dadc6ea162ee26d0ebf914d5d4c250cc082e80b256" protocol=ttrpc version=3 Jul 12 10:13:39.438613 systemd[1]: Started cri-containerd-7205f18bb82fad763e835862dc30dd419cb971c671c84bee73b14914a32efae9.scope - libcontainer container 7205f18bb82fad763e835862dc30dd419cb971c671c84bee73b14914a32efae9. 
Jul 12 10:13:39.475367 containerd[1615]: time="2025-07-12T10:13:39.475342061Z" level=info msg="StartContainer for \"7205f18bb82fad763e835862dc30dd419cb971c671c84bee73b14914a32efae9\" returns successfully" Jul 12 10:13:40.132847 systemd-networkd[1543]: vxlan.calico: Link UP Jul 12 10:13:40.132853 systemd-networkd[1543]: vxlan.calico: Gained carrier Jul 12 10:13:40.251921 kubelet[2945]: I0712 10:13:40.251752 2945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gmrqc" podStartSLOduration=39.251738823 podStartE2EDuration="39.251738823s" podCreationTimestamp="2025-07-12 10:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 10:13:40.251579418 +0000 UTC m=+44.674506553" watchObservedRunningTime="2025-07-12 10:13:40.251738823 +0000 UTC m=+44.674665961" Jul 12 10:13:40.449500 systemd-networkd[1543]: caliae5d468535f: Gained IPv6LL Jul 12 10:13:40.589705 containerd[1615]: time="2025-07-12T10:13:40.589619387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:40.597128 containerd[1615]: time="2025-07-12T10:13:40.597107928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 12 10:13:40.626087 containerd[1615]: time="2025-07-12T10:13:40.625821862Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:40.633773 containerd[1615]: time="2025-07-12T10:13:40.633748215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:40.634237 containerd[1615]: time="2025-07-12T10:13:40.634216764Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.790102585s" Jul 12 10:13:40.634277 containerd[1615]: time="2025-07-12T10:13:40.634237409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 12 10:13:40.658062 containerd[1615]: time="2025-07-12T10:13:40.657953091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 10:13:40.663078 containerd[1615]: time="2025-07-12T10:13:40.662926808Z" level=info msg="CreateContainer within sandbox \"3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 10:13:40.671858 containerd[1615]: time="2025-07-12T10:13:40.668757128Z" level=info msg="Container 1e8d4eb22bb6e454fa41033ad8dcb366642f0833f0b6e4de6d5f2ba6cb14db5a: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:40.687815 containerd[1615]: time="2025-07-12T10:13:40.687789473Z" level=info msg="CreateContainer within sandbox \"3c522c156e0db86b752debfcfdad0d6bdebf409c74ff5916a98443b93c7cee58\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"1e8d4eb22bb6e454fa41033ad8dcb366642f0833f0b6e4de6d5f2ba6cb14db5a\"" Jul 12 10:13:40.688392 containerd[1615]: time="2025-07-12T10:13:40.688371507Z" level=info msg="StartContainer for \"1e8d4eb22bb6e454fa41033ad8dcb366642f0833f0b6e4de6d5f2ba6cb14db5a\"" Jul 12 10:13:40.689697 containerd[1615]: time="2025-07-12T10:13:40.689680676Z" level=info msg="connecting to shim 1e8d4eb22bb6e454fa41033ad8dcb366642f0833f0b6e4de6d5f2ba6cb14db5a" address="unix:///run/containerd/s/b720c9d88484c5d426ac14fda4a61bf1f966d61e375454188a656ec1884277c8" protocol=ttrpc version=3 Jul 12 10:13:40.724569 systemd[1]: Started cri-containerd-1e8d4eb22bb6e454fa41033ad8dcb366642f0833f0b6e4de6d5f2ba6cb14db5a.scope - libcontainer container 1e8d4eb22bb6e454fa41033ad8dcb366642f0833f0b6e4de6d5f2ba6cb14db5a. Jul 12 10:13:40.760125 containerd[1615]: time="2025-07-12T10:13:40.760101496Z" level=info msg="StartContainer for \"1e8d4eb22bb6e454fa41033ad8dcb366642f0833f0b6e4de6d5f2ba6cb14db5a\" returns successfully" Jul 12 10:13:41.039089 containerd[1615]: time="2025-07-12T10:13:41.038797401Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:41.040099 containerd[1615]: time="2025-07-12T10:13:41.039543944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 12 10:13:41.040632 containerd[1615]: time="2025-07-12T10:13:41.040591761Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 382.245164ms" Jul 12 10:13:41.040632 containerd[1615]: time="2025-07-12T10:13:41.040607330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 12 10:13:41.041443 containerd[1615]: time="2025-07-12T10:13:41.041097252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 12 10:13:41.044047 containerd[1615]: time="2025-07-12T10:13:41.044018471Z" level=info msg="CreateContainer within sandbox \"703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 10:13:41.051582 containerd[1615]: time="2025-07-12T10:13:41.051544138Z" level=info msg="Container 926fa12170899c34cd0a187aad8a848a9d19cd5b46ddd5d77f3be54fc63270b8: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:41.054618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount707576931.mount: Deactivated successfully. 
Jul 12 10:13:41.069893 containerd[1615]: time="2025-07-12T10:13:41.069828203Z" level=info msg="CreateContainer within sandbox \"703cdd2c96d581ad304ee18f238bf54e438bae7f306b82c8eb763b3da07e0a64\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"926fa12170899c34cd0a187aad8a848a9d19cd5b46ddd5d77f3be54fc63270b8\"" Jul 12 10:13:41.070408 containerd[1615]: time="2025-07-12T10:13:41.070337622Z" level=info msg="StartContainer for \"926fa12170899c34cd0a187aad8a848a9d19cd5b46ddd5d77f3be54fc63270b8\"" Jul 12 10:13:41.071115 containerd[1615]: time="2025-07-12T10:13:41.071085598Z" level=info msg="connecting to shim 926fa12170899c34cd0a187aad8a848a9d19cd5b46ddd5d77f3be54fc63270b8" address="unix:///run/containerd/s/15abe9293e4aec2d6eaa96b0f981acd91341d025beb8acf4d063b935d898ef62" protocol=ttrpc version=3 Jul 12 10:13:41.089493 systemd[1]: Started cri-containerd-926fa12170899c34cd0a187aad8a848a9d19cd5b46ddd5d77f3be54fc63270b8.scope - libcontainer container 926fa12170899c34cd0a187aad8a848a9d19cd5b46ddd5d77f3be54fc63270b8. Jul 12 10:13:41.138966 containerd[1615]: time="2025-07-12T10:13:41.138935593Z" level=info msg="StartContainer for \"926fa12170899c34cd0a187aad8a848a9d19cd5b46ddd5d77f3be54fc63270b8\" returns successfully" Jul 12 10:13:41.217589 systemd-networkd[1543]: vxlan.calico: Gained IPv6LL Jul 12 10:13:41.227493 kubelet[2945]: I0712 10:13:41.227463 2945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-ddbdff869-x7wvt" podStartSLOduration=27.105266805 podStartE2EDuration="32.22745237s" podCreationTimestamp="2025-07-12 10:13:09 +0000 UTC" firstStartedPulling="2025-07-12 10:13:35.918814655 +0000 UTC m=+40.341741787" lastFinishedPulling="2025-07-12 10:13:41.041000221 +0000 UTC m=+45.463927352" observedRunningTime="2025-07-12 10:13:41.187273634 +0000 UTC m=+45.610200777" watchObservedRunningTime="2025-07-12 10:13:41.22745237 +0000 UTC m=+45.650379507" Jul 12 10:13:41.242375 kubelet[2945]: I0712 10:13:41.242344 2945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-ddbdff869-9bsjp" podStartSLOduration=26.548253822 podStartE2EDuration="32.242333843s" podCreationTimestamp="2025-07-12 10:13:09 +0000 UTC" firstStartedPulling="2025-07-12 10:13:34.963753085 +0000 UTC m=+39.386680223" lastFinishedPulling="2025-07-12 10:13:40.65783311 +0000 UTC m=+45.080760244" observedRunningTime="2025-07-12 10:13:41.242178892 +0000 UTC m=+45.665106034" watchObservedRunningTime="2025-07-12 10:13:41.242333843 +0000 UTC m=+45.665260986" Jul 12 10:13:42.184238 kubelet[2945]: I0712 10:13:42.184214 2945 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 10:13:42.194610 kubelet[2945]: I0712 10:13:42.174170 2945 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 10:13:43.992073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1183385804.mount: Deactivated successfully. 
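The kubelet pod_startup_latency_tracker entries above (and the goldmane one further down) report two figures per pod: podStartE2EDuration, the span from podCreationTimestamp to watchObservedRunningTime, and podStartSLOduration, which additionally excludes the image-pull window between firstStartedPulling and lastFinishedPulling. A small Go check against the calico-apiserver-ddbdff869-x7wvt entry (timestamps copied from the log; the kubelet's internal rounding can differ by a few nanoseconds, so treat this as an illustration rather than the exact formula):

package main

import (
	"fmt"
	"time"
)

// mustParse reads timestamps in the "2025-07-12 10:13:09 +0000 UTC" form used by these log lines.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-07-12 10:13:09 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2025-07-12 10:13:35.918814655 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-07-12 10:13:41.041000221 +0000 UTC")  // lastFinishedPulling
	running := mustParse("2025-07-12 10:13:41.22745237 +0000 UTC")    // watchObservedRunningTime

	e2e := running.Sub(created)          // 32.22745237s, matching podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // ~27.105266804s, matching podStartSLOduration
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}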
Jul 12 10:13:44.429315 containerd[1615]: time="2025-07-12T10:13:44.429282613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:44.431002 containerd[1615]: time="2025-07-12T10:13:44.429516153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 12 10:13:44.431002 containerd[1615]: time="2025-07-12T10:13:44.430353184Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:44.432278 containerd[1615]: time="2025-07-12T10:13:44.432072078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:44.432663 containerd[1615]: time="2025-07-12T10:13:44.432600438Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.391488987s" Jul 12 10:13:44.432663 containerd[1615]: time="2025-07-12T10:13:44.432619973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 12 10:13:44.495339 containerd[1615]: time="2025-07-12T10:13:44.495310937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 12 10:13:44.608493 containerd[1615]: time="2025-07-12T10:13:44.608464829Z" level=info msg="CreateContainer within sandbox \"c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 12 10:13:44.617251 containerd[1615]: time="2025-07-12T10:13:44.615736142Z" level=info msg="Container b8b0022fa79ac67ee93a01984ed619cd85a2a790867c99ac65513f1ce5c5c27a: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:44.632449 containerd[1615]: time="2025-07-12T10:13:44.632431653Z" level=info msg="CreateContainer within sandbox \"c240116403376cf31febd4024d7965f78487dd7c1c09ba4f1c0dcb0253c15844\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b8b0022fa79ac67ee93a01984ed619cd85a2a790867c99ac65513f1ce5c5c27a\"" Jul 12 10:13:44.637834 containerd[1615]: time="2025-07-12T10:13:44.637811846Z" level=info msg="StartContainer for \"b8b0022fa79ac67ee93a01984ed619cd85a2a790867c99ac65513f1ce5c5c27a\"" Jul 12 10:13:44.639000 containerd[1615]: time="2025-07-12T10:13:44.638983308Z" level=info msg="connecting to shim b8b0022fa79ac67ee93a01984ed619cd85a2a790867c99ac65513f1ce5c5c27a" address="unix:///run/containerd/s/f25b54b2065831bbee0ca7daf35cf9acfc1ecc18160ca884d33012c2c47bc21b" protocol=ttrpc version=3 Jul 12 10:13:44.910750 systemd[1]: Started cri-containerd-b8b0022fa79ac67ee93a01984ed619cd85a2a790867c99ac65513f1ce5c5c27a.scope - libcontainer container b8b0022fa79ac67ee93a01984ed619cd85a2a790867c99ac65513f1ce5c5c27a. 
Jul 12 10:13:45.002817 containerd[1615]: time="2025-07-12T10:13:45.002782324Z" level=info msg="StartContainer for \"b8b0022fa79ac67ee93a01984ed619cd85a2a790867c99ac65513f1ce5c5c27a\" returns successfully" Jul 12 10:13:45.625522 containerd[1615]: time="2025-07-12T10:13:45.625488963Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8b0022fa79ac67ee93a01984ed619cd85a2a790867c99ac65513f1ce5c5c27a\" id:\"d9bfe6cc46d9a6c348fee9cf1e48dadd80271217a820d3847c88929a4e8e8951\" pid:5423 exited_at:{seconds:1752315225 nanos:606835235}" Jul 12 10:13:45.978791 kubelet[2945]: I0712 10:13:45.973429 2945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-x88vq" podStartSLOduration=26.588089583 podStartE2EDuration="34.951391907s" podCreationTimestamp="2025-07-12 10:13:11 +0000 UTC" firstStartedPulling="2025-07-12 10:13:36.096591605 +0000 UTC m=+40.519518739" lastFinishedPulling="2025-07-12 10:13:44.459893926 +0000 UTC m=+48.882821063" observedRunningTime="2025-07-12 10:13:45.870407833 +0000 UTC m=+50.293334971" watchObservedRunningTime="2025-07-12 10:13:45.951391907 +0000 UTC m=+50.374319043" Jul 12 10:13:46.157059 containerd[1615]: time="2025-07-12T10:13:46.157017897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:46.157530 containerd[1615]: time="2025-07-12T10:13:46.157500613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 12 10:13:46.157530 containerd[1615]: time="2025-07-12T10:13:46.157509013Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:46.159192 containerd[1615]: time="2025-07-12T10:13:46.158890518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:46.159192 containerd[1615]: time="2025-07-12T10:13:46.159117268Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.663781628s" Jul 12 10:13:46.159192 containerd[1615]: time="2025-07-12T10:13:46.159133435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 12 10:13:46.167661 containerd[1615]: time="2025-07-12T10:13:46.167649791Z" level=info msg="CreateContainer within sandbox \"6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 12 10:13:46.181897 containerd[1615]: time="2025-07-12T10:13:46.178757129Z" level=info msg="Container 844bb34d4e58d2aad3e80fa9df89f17ac917d398d5386ed663bdb68ab75d653d: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:46.192747 containerd[1615]: time="2025-07-12T10:13:46.192729614Z" level=info msg="CreateContainer within sandbox \"6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"844bb34d4e58d2aad3e80fa9df89f17ac917d398d5386ed663bdb68ab75d653d\"" Jul 12 10:13:46.193553 containerd[1615]: time="2025-07-12T10:13:46.193498087Z" level=info msg="StartContainer for \"844bb34d4e58d2aad3e80fa9df89f17ac917d398d5386ed663bdb68ab75d653d\"" Jul 12 10:13:46.194421 containerd[1615]: time="2025-07-12T10:13:46.194377167Z" level=info msg="connecting to shim 844bb34d4e58d2aad3e80fa9df89f17ac917d398d5386ed663bdb68ab75d653d" address="unix:///run/containerd/s/6e3e20bca984f7d3725aafde0335a4a144569a40958e33c6cff593ea46632587" protocol=ttrpc version=3 Jul 12 10:13:46.217485 systemd[1]: Started cri-containerd-844bb34d4e58d2aad3e80fa9df89f17ac917d398d5386ed663bdb68ab75d653d.scope - libcontainer container 844bb34d4e58d2aad3e80fa9df89f17ac917d398d5386ed663bdb68ab75d653d. Jul 12 10:13:46.243293 containerd[1615]: time="2025-07-12T10:13:46.243213875Z" level=info msg="StartContainer for \"844bb34d4e58d2aad3e80fa9df89f17ac917d398d5386ed663bdb68ab75d653d\" returns successfully" Jul 12 10:13:46.255789 containerd[1615]: time="2025-07-12T10:13:46.255759784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 12 10:13:47.673412 containerd[1615]: time="2025-07-12T10:13:47.673172556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:47.673646 containerd[1615]: time="2025-07-12T10:13:47.673529373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 12 10:13:47.684773 containerd[1615]: time="2025-07-12T10:13:47.673876537Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:47.684906 containerd[1615]: time="2025-07-12T10:13:47.675293266Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.419426561s" Jul 12 10:13:47.684933 containerd[1615]: time="2025-07-12T10:13:47.684909519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 12 10:13:47.685230 containerd[1615]: time="2025-07-12T10:13:47.685219983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:47.737521 containerd[1615]: time="2025-07-12T10:13:47.737503489Z" level=info msg="CreateContainer within sandbox \"6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 12 10:13:47.741517 containerd[1615]: time="2025-07-12T10:13:47.740894216Z" level=info msg="Container 1fec137562d059a0497452e257a5f4caf886476d4083f000f0d89c1376f0b4cf: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:47.746706 containerd[1615]: time="2025-07-12T10:13:47.746658555Z" level=info msg="CreateContainer within sandbox 
\"6bb3ab75372195941ba7dec219e1c01deb9e5250f7ba185d92aa924b181e90fe\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1fec137562d059a0497452e257a5f4caf886476d4083f000f0d89c1376f0b4cf\"" Jul 12 10:13:47.747140 containerd[1615]: time="2025-07-12T10:13:47.747038205Z" level=info msg="StartContainer for \"1fec137562d059a0497452e257a5f4caf886476d4083f000f0d89c1376f0b4cf\"" Jul 12 10:13:47.748132 containerd[1615]: time="2025-07-12T10:13:47.748084981Z" level=info msg="connecting to shim 1fec137562d059a0497452e257a5f4caf886476d4083f000f0d89c1376f0b4cf" address="unix:///run/containerd/s/6e3e20bca984f7d3725aafde0335a4a144569a40958e33c6cff593ea46632587" protocol=ttrpc version=3 Jul 12 10:13:47.778481 systemd[1]: Started cri-containerd-1fec137562d059a0497452e257a5f4caf886476d4083f000f0d89c1376f0b4cf.scope - libcontainer container 1fec137562d059a0497452e257a5f4caf886476d4083f000f0d89c1376f0b4cf. Jul 12 10:13:47.800031 containerd[1615]: time="2025-07-12T10:13:47.799997869Z" level=info msg="StartContainer for \"1fec137562d059a0497452e257a5f4caf886476d4083f000f0d89c1376f0b4cf\" returns successfully" Jul 12 10:13:48.598250 kubelet[2945]: I0712 10:13:48.597799 2945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5skpj" podStartSLOduration=26.072836358 podStartE2EDuration="36.597760413s" podCreationTimestamp="2025-07-12 10:13:12 +0000 UTC" firstStartedPulling="2025-07-12 10:13:37.199984069 +0000 UTC m=+41.622911200" lastFinishedPulling="2025-07-12 10:13:47.724908124 +0000 UTC m=+52.147835255" observedRunningTime="2025-07-12 10:13:48.592383538 +0000 UTC m=+53.015310675" watchObservedRunningTime="2025-07-12 10:13:48.597760413 +0000 UTC m=+53.020687552" Jul 12 10:13:48.878621 kubelet[2945]: I0712 10:13:48.878492 2945 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 12 10:13:48.878621 kubelet[2945]: I0712 10:13:48.878527 2945 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 12 10:13:51.447686 kubelet[2945]: I0712 10:13:51.447612 2945 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 10:13:59.251585 containerd[1615]: time="2025-07-12T10:13:59.251524559Z" level=info msg="TaskExit event in podsandbox handler container_id:\"479e01e5cdb5db9e53d13c70f602154ab0c3d70f9300796591572fe312c685de\" id:\"04df5a60ff6976273c684f8a6c1fb6724b15f4445e6f54e2692fcc0c5aaa4414\" pid:5544 exited_at:{seconds:1752315239 nanos:247092725}" Jul 12 10:14:05.102963 kubelet[2945]: I0712 10:14:05.102858 2945 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 10:14:08.162372 containerd[1615]: time="2025-07-12T10:14:08.162347358Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af69f0b8966db47dfb44e9a6a2b63cbd2090a155159cb1e52c796e2ca56d7855\" id:\"c319c7f97ed40df51468ec7670dbdeabc51e28e5558e4280da7d53a83d8dea92\" pid:5606 exited_at:{seconds:1752315248 nanos:162156360}" Jul 12 10:14:13.054007 systemd[1]: Started sshd@7-139.178.70.103:22-139.178.89.65:45118.service - OpenSSH per-connection server daemon (139.178.89.65:45118). 
Jul 12 10:14:13.267411 sshd[5619]: Accepted publickey for core from 139.178.89.65 port 45118 ssh2: RSA SHA256:er45sfTMlm4F7XCjgdqGTRql8gwHayiMHL6WgBzS/8A Jul 12 10:14:13.271939 sshd-session[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:14:13.279450 systemd-logind[1597]: New session 10 of user core. Jul 12 10:14:13.286550 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 12 10:14:14.103461 sshd[5622]: Connection closed by 139.178.89.65 port 45118 Jul 12 10:14:14.103777 sshd-session[5619]: pam_unix(sshd:session): session closed for user core Jul 12 10:14:14.110115 systemd[1]: sshd@7-139.178.70.103:22-139.178.89.65:45118.service: Deactivated successfully. Jul 12 10:14:14.112072 systemd[1]: session-10.scope: Deactivated successfully. Jul 12 10:14:14.112960 systemd-logind[1597]: Session 10 logged out. Waiting for processes to exit. Jul 12 10:14:14.114211 systemd-logind[1597]: Removed session 10. Jul 12 10:14:15.568447 containerd[1615]: time="2025-07-12T10:14:15.568386111Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af69f0b8966db47dfb44e9a6a2b63cbd2090a155159cb1e52c796e2ca56d7855\" id:\"5c5512aceffac643093b1705707bb66c8aedee1aab9def553e10e9679ce03170\" pid:5665 exited_at:{seconds:1752315255 nanos:568005613}" Jul 12 10:14:16.266791 containerd[1615]: time="2025-07-12T10:14:16.266719965Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8b0022fa79ac67ee93a01984ed619cd85a2a790867c99ac65513f1ce5c5c27a\" id:\"85ad0eee5f686b4a19386721f1a3189781e43827c2cad6b9af85cc33e6a31506\" pid:5646 exited_at:{seconds:1752315256 nanos:266333370}" Jul 12 10:14:16.277714 containerd[1615]: time="2025-07-12T10:14:16.277657801Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8b0022fa79ac67ee93a01984ed619cd85a2a790867c99ac65513f1ce5c5c27a\" id:\"86b2d6c0f5e5d2dbb7c1652b8e9a6f73604717690dbd4e34b3be800d6b5f6222\" pid:5684 exited_at:{seconds:1752315256 nanos:276170039}" Jul 12 10:14:19.118458 systemd[1]: Started sshd@8-139.178.70.103:22-139.178.89.65:45124.service - OpenSSH per-connection server daemon (139.178.89.65:45124). Jul 12 10:14:19.264992 sshd[5704]: Accepted publickey for core from 139.178.89.65 port 45124 ssh2: RSA SHA256:er45sfTMlm4F7XCjgdqGTRql8gwHayiMHL6WgBzS/8A Jul 12 10:14:19.267147 sshd-session[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:14:19.272796 systemd-logind[1597]: New session 11 of user core. Jul 12 10:14:19.278493 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 12 10:14:19.995967 sshd[5707]: Connection closed by 139.178.89.65 port 45124 Jul 12 10:14:19.996682 sshd-session[5704]: pam_unix(sshd:session): session closed for user core Jul 12 10:14:20.000099 systemd[1]: sshd@8-139.178.70.103:22-139.178.89.65:45124.service: Deactivated successfully. Jul 12 10:14:20.001381 systemd[1]: session-11.scope: Deactivated successfully. Jul 12 10:14:20.002166 systemd-logind[1597]: Session 11 logged out. Waiting for processes to exit. Jul 12 10:14:20.003212 systemd-logind[1597]: Removed session 11. Jul 12 10:14:25.008102 systemd[1]: Started sshd@9-139.178.70.103:22-139.178.89.65:42380.service - OpenSSH per-connection server daemon (139.178.89.65:42380). 
Jul 12 10:14:25.109346 sshd[5728]: Accepted publickey for core from 139.178.89.65 port 42380 ssh2: RSA SHA256:er45sfTMlm4F7XCjgdqGTRql8gwHayiMHL6WgBzS/8A Jul 12 10:14:25.111947 sshd-session[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:14:25.115541 systemd-logind[1597]: New session 12 of user core. Jul 12 10:14:25.121620 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 12 10:14:25.254479 sshd[5732]: Connection closed by 139.178.89.65 port 42380 Jul 12 10:14:25.255747 sshd-session[5728]: pam_unix(sshd:session): session closed for user core Jul 12 10:14:25.263882 systemd[1]: sshd@9-139.178.70.103:22-139.178.89.65:42380.service: Deactivated successfully. Jul 12 10:14:25.266067 systemd[1]: session-12.scope: Deactivated successfully. Jul 12 10:14:25.267754 systemd-logind[1597]: Session 12 logged out. Waiting for processes to exit. Jul 12 10:14:25.272494 systemd[1]: Started sshd@10-139.178.70.103:22-139.178.89.65:42396.service - OpenSSH per-connection server daemon (139.178.89.65:42396). Jul 12 10:14:25.274743 systemd-logind[1597]: Removed session 12. Jul 12 10:14:25.328897 sshd[5745]: Accepted publickey for core from 139.178.89.65 port 42396 ssh2: RSA SHA256:er45sfTMlm4F7XCjgdqGTRql8gwHayiMHL6WgBzS/8A Jul 12 10:14:25.329855 sshd-session[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:14:25.334662 systemd-logind[1597]: New session 13 of user core. Jul 12 10:14:25.339591 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 12 10:14:25.470769 sshd[5748]: Connection closed by 139.178.89.65 port 42396 Jul 12 10:14:25.471797 sshd-session[5745]: pam_unix(sshd:session): session closed for user core Jul 12 10:14:25.477815 systemd[1]: sshd@10-139.178.70.103:22-139.178.89.65:42396.service: Deactivated successfully. Jul 12 10:14:25.479238 systemd[1]: session-13.scope: Deactivated successfully. Jul 12 10:14:25.480935 systemd-logind[1597]: Session 13 logged out. Waiting for processes to exit. Jul 12 10:14:25.484322 systemd[1]: Started sshd@11-139.178.70.103:22-139.178.89.65:42412.service - OpenSSH per-connection server daemon (139.178.89.65:42412). Jul 12 10:14:25.488034 systemd-logind[1597]: Removed session 13. Jul 12 10:14:25.547268 sshd[5758]: Accepted publickey for core from 139.178.89.65 port 42412 ssh2: RSA SHA256:er45sfTMlm4F7XCjgdqGTRql8gwHayiMHL6WgBzS/8A Jul 12 10:14:25.548830 sshd-session[5758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:14:25.552049 systemd-logind[1597]: New session 14 of user core. Jul 12 10:14:25.557479 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 12 10:14:25.660517 sshd[5761]: Connection closed by 139.178.89.65 port 42412 Jul 12 10:14:25.660825 sshd-session[5758]: pam_unix(sshd:session): session closed for user core Jul 12 10:14:25.663094 systemd[1]: sshd@11-139.178.70.103:22-139.178.89.65:42412.service: Deactivated successfully. Jul 12 10:14:25.664320 systemd[1]: session-14.scope: Deactivated successfully. Jul 12 10:14:25.664898 systemd-logind[1597]: Session 14 logged out. Waiting for processes to exit. Jul 12 10:14:25.666155 systemd-logind[1597]: Removed session 14. 
Jul 12 10:14:30.264433 containerd[1615]: time="2025-07-12T10:14:30.264179427Z" level=info msg="TaskExit event in podsandbox handler container_id:\"479e01e5cdb5db9e53d13c70f602154ab0c3d70f9300796591572fe312c685de\" id:\"46a587277f18c53118ee5c5ccd5b2dff27d70c104c0c1eaad46143f01c8b5727\" pid:5784 exit_status:1 exited_at:{seconds:1752315270 nanos:233301483}" Jul 12 10:14:30.672633 systemd[1]: Started sshd@12-139.178.70.103:22-139.178.89.65:40716.service - OpenSSH per-connection server daemon (139.178.89.65:40716). Jul 12 10:14:30.824513 sshd[5801]: Accepted publickey for core from 139.178.89.65 port 40716 ssh2: RSA SHA256:er45sfTMlm4F7XCjgdqGTRql8gwHayiMHL6WgBzS/8A Jul 12 10:14:30.826571 sshd-session[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:14:30.830996 systemd-logind[1597]: New session 15 of user core. Jul 12 10:14:30.838491 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 12 10:14:31.321634 sshd[5804]: Connection closed by 139.178.89.65 port 40716 Jul 12 10:14:31.322029 sshd-session[5801]: pam_unix(sshd:session): session closed for user core Jul 12 10:14:31.326782 systemd-logind[1597]: Session 15 logged out. Waiting for processes to exit. Jul 12 10:14:31.327080 systemd[1]: sshd@12-139.178.70.103:22-139.178.89.65:40716.service: Deactivated successfully. Jul 12 10:14:31.328820 systemd[1]: session-15.scope: Deactivated successfully. Jul 12 10:14:31.330252 systemd-logind[1597]: Removed session 15. Jul 12 10:14:36.344502 systemd[1]: Started sshd@13-139.178.70.103:22-139.178.89.65:40730.service - OpenSSH per-connection server daemon (139.178.89.65:40730). Jul 12 10:14:36.464620 sshd[5822]: Accepted publickey for core from 139.178.89.65 port 40730 ssh2: RSA SHA256:er45sfTMlm4F7XCjgdqGTRql8gwHayiMHL6WgBzS/8A Jul 12 10:14:36.465671 sshd-session[5822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:14:36.469991 systemd-logind[1597]: New session 16 of user core. Jul 12 10:14:36.473497 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 12 10:14:37.239133 sshd[5825]: Connection closed by 139.178.89.65 port 40730 Jul 12 10:14:37.239727 sshd-session[5822]: pam_unix(sshd:session): session closed for user core Jul 12 10:14:37.250826 systemd[1]: sshd@13-139.178.70.103:22-139.178.89.65:40730.service: Deactivated successfully. Jul 12 10:14:37.253435 systemd[1]: session-16.scope: Deactivated successfully. Jul 12 10:14:37.254828 systemd-logind[1597]: Session 16 logged out. Waiting for processes to exit. Jul 12 10:14:37.256011 systemd-logind[1597]: Removed session 16. Jul 12 10:14:38.405303 containerd[1615]: time="2025-07-12T10:14:38.405262941Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af69f0b8966db47dfb44e9a6a2b63cbd2090a155159cb1e52c796e2ca56d7855\" id:\"45303c71ae969abde1c4ac3f4bd1580d5cbc33c3d7b9b4771783460427e4cc78\" pid:5848 exited_at:{seconds:1752315278 nanos:397575655}" Jul 12 10:14:42.256554 systemd[1]: Started sshd@14-139.178.70.103:22-139.178.89.65:56784.service - OpenSSH per-connection server daemon (139.178.89.65:56784). Jul 12 10:14:42.392626 sshd[5858]: Accepted publickey for core from 139.178.89.65 port 56784 ssh2: RSA SHA256:er45sfTMlm4F7XCjgdqGTRql8gwHayiMHL6WgBzS/8A Jul 12 10:14:42.395523 sshd-session[5858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:14:42.399038 systemd-logind[1597]: New session 17 of user core. Jul 12 10:14:42.405540 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jul 12 10:14:43.006314 sshd[5861]: Connection closed by 139.178.89.65 port 56784 Jul 12 10:14:43.007212 sshd-session[5858]: pam_unix(sshd:session): session closed for user core Jul 12 10:14:43.013309 systemd[1]: sshd@14-139.178.70.103:22-139.178.89.65:56784.service: Deactivated successfully. Jul 12 10:14:43.014758 systemd[1]: session-17.scope: Deactivated successfully. Jul 12 10:14:43.020720 systemd-logind[1597]: Session 17 logged out. Waiting for processes to exit. Jul 12 10:14:43.022368 systemd[1]: Started sshd@15-139.178.70.103:22-139.178.89.65:56788.service - OpenSSH per-connection server daemon (139.178.89.65:56788). Jul 12 10:14:43.024044 systemd-logind[1597]: Removed session 17. Jul 12 10:14:43.081149 sshd[5874]: Accepted publickey for core from 139.178.89.65 port 56788 ssh2: RSA SHA256:er45sfTMlm4F7XCjgdqGTRql8gwHayiMHL6WgBzS/8A Jul 12 10:14:43.082029 sshd-session[5874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:14:43.085255 systemd-logind[1597]: New session 18 of user core. Jul 12 10:14:43.099553 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 12 10:14:43.750011 sshd[5877]: Connection closed by 139.178.89.65 port 56788 Jul 12 10:14:43.759647 sshd-session[5874]: pam_unix(sshd:session): session closed for user core Jul 12 10:14:43.761927 systemd[1]: Started sshd@16-139.178.70.103:22-139.178.89.65:56790.service - OpenSSH per-connection server daemon (139.178.89.65:56790). Jul 12 10:14:43.770165 systemd-logind[1597]: Session 18 logged out. Waiting for processes to exit. Jul 12 10:14:43.770750 systemd[1]: sshd@15-139.178.70.103:22-139.178.89.65:56788.service: Deactivated successfully. Jul 12 10:14:43.772037 systemd[1]: session-18.scope: Deactivated successfully. Jul 12 10:14:43.773493 systemd-logind[1597]: Removed session 18. Jul 12 10:14:43.937310 sshd[5884]: Accepted publickey for core from 139.178.89.65 port 56790 ssh2: RSA SHA256:er45sfTMlm4F7XCjgdqGTRql8gwHayiMHL6WgBzS/8A Jul 12 10:14:43.937883 sshd-session[5884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:14:43.943850 systemd-logind[1597]: New session 19 of user core. Jul 12 10:14:43.949540 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 12 10:14:44.997426 sshd[5890]: Connection closed by 139.178.89.65 port 56790 Jul 12 10:14:45.007074 sshd-session[5884]: pam_unix(sshd:session): session closed for user core Jul 12 10:14:45.007664 systemd[1]: Started sshd@17-139.178.70.103:22-139.178.89.65:56806.service - OpenSSH per-connection server daemon (139.178.89.65:56806). Jul 12 10:14:45.027089 systemd[1]: sshd@16-139.178.70.103:22-139.178.89.65:56790.service: Deactivated successfully. Jul 12 10:14:45.029027 systemd[1]: session-19.scope: Deactivated successfully. Jul 12 10:14:45.029575 systemd-logind[1597]: Session 19 logged out. Waiting for processes to exit. Jul 12 10:14:45.030514 systemd-logind[1597]: Removed session 19. Jul 12 10:14:45.118030 sshd[5900]: Accepted publickey for core from 139.178.89.65 port 56806 ssh2: RSA SHA256:er45sfTMlm4F7XCjgdqGTRql8gwHayiMHL6WgBzS/8A Jul 12 10:14:45.121744 sshd-session[5900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:14:45.129522 systemd-logind[1597]: New session 20 of user core. Jul 12 10:14:45.133482 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 12 10:14:46.040343 containerd[1615]: time="2025-07-12T10:14:46.033202128Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8b0022fa79ac67ee93a01984ed619cd85a2a790867c99ac65513f1ce5c5c27a\" id:\"5e331d543cd851ca4699229b196683620a0b146c37c69528aed652cc22e7b36c\" pid:5928 exited_at:{seconds:1752315286 nanos:28526161}" Jul 12 10:14:46.692582 sshd[5909]: Connection closed by 139.178.89.65 port 56806 Jul 12 10:14:46.701349 systemd[1]: Started sshd@18-139.178.70.103:22-139.178.89.65:56812.service - OpenSSH per-connection server daemon (139.178.89.65:56812). Jul 12 10:14:46.718641 systemd[1]: sshd@17-139.178.70.103:22-139.178.89.65:56806.service: Deactivated successfully. Jul 12 10:14:46.715971 sshd-session[5900]: pam_unix(sshd:session): session closed for user core Jul 12 10:14:46.719992 systemd[1]: session-20.scope: Deactivated successfully. Jul 12 10:14:46.720178 systemd[1]: session-20.scope: Consumed 571ms CPU time, 70.9M memory peak. Jul 12 10:14:46.720910 systemd-logind[1597]: Session 20 logged out. Waiting for processes to exit. Jul 12 10:14:46.722119 systemd-logind[1597]: Removed session 20. Jul 12 10:14:47.069645 sshd[5939]: Accepted publickey for core from 139.178.89.65 port 56812 ssh2: RSA SHA256:er45sfTMlm4F7XCjgdqGTRql8gwHayiMHL6WgBzS/8A Jul 12 10:14:47.070463 sshd-session[5939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:14:47.073064 systemd-logind[1597]: New session 21 of user core. Jul 12 10:14:47.078610 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 12 10:14:47.443542 sshd[5947]: Connection closed by 139.178.89.65 port 56812 Jul 12 10:14:47.446971 sshd-session[5939]: pam_unix(sshd:session): session closed for user core Jul 12 10:14:47.455422 systemd-logind[1597]: Session 21 logged out. Waiting for processes to exit. Jul 12 10:14:47.455667 systemd[1]: sshd@18-139.178.70.103:22-139.178.89.65:56812.service: Deactivated successfully. Jul 12 10:14:47.457149 systemd[1]: session-21.scope: Deactivated successfully. Jul 12 10:14:47.458069 systemd-logind[1597]: Removed session 21. Jul 12 10:14:52.459626 systemd[1]: Started sshd@19-139.178.70.103:22-139.178.89.65:60630.service - OpenSSH per-connection server daemon (139.178.89.65:60630). Jul 12 10:14:52.592891 sshd[5961]: Accepted publickey for core from 139.178.89.65 port 60630 ssh2: RSA SHA256:er45sfTMlm4F7XCjgdqGTRql8gwHayiMHL6WgBzS/8A Jul 12 10:14:52.597113 sshd-session[5961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:14:52.601725 systemd-logind[1597]: New session 22 of user core. Jul 12 10:14:52.605491 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 12 10:14:53.062903 sshd[5964]: Connection closed by 139.178.89.65 port 60630 Jul 12 10:14:53.063254 sshd-session[5961]: pam_unix(sshd:session): session closed for user core Jul 12 10:14:53.067030 systemd-logind[1597]: Session 22 logged out. Waiting for processes to exit. Jul 12 10:14:53.067181 systemd[1]: sshd@19-139.178.70.103:22-139.178.89.65:60630.service: Deactivated successfully. Jul 12 10:14:53.068536 systemd[1]: session-22.scope: Deactivated successfully. Jul 12 10:14:53.070727 systemd-logind[1597]: Removed session 22. Jul 12 10:14:58.142955 systemd[1]: Started sshd@20-139.178.70.103:22-139.178.89.65:60636.service - OpenSSH per-connection server daemon (139.178.89.65:60636). 
Jul 12 10:14:58.252607 sshd[5978]: Accepted publickey for core from 139.178.89.65 port 60636 ssh2: RSA SHA256:er45sfTMlm4F7XCjgdqGTRql8gwHayiMHL6WgBzS/8A Jul 12 10:14:58.253590 sshd-session[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:14:58.256688 systemd-logind[1597]: New session 23 of user core. Jul 12 10:14:58.265494 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 12 10:14:58.823807 sshd[5981]: Connection closed by 139.178.89.65 port 60636 Jul 12 10:14:58.827360 sshd-session[5978]: pam_unix(sshd:session): session closed for user core Jul 12 10:14:58.830050 systemd-logind[1597]: Session 23 logged out. Waiting for processes to exit. Jul 12 10:14:58.831296 systemd[1]: sshd@20-139.178.70.103:22-139.178.89.65:60636.service: Deactivated successfully. Jul 12 10:14:58.833096 systemd[1]: session-23.scope: Deactivated successfully. Jul 12 10:14:58.835451 systemd-logind[1597]: Removed session 23. Jul 12 10:15:00.070931 containerd[1615]: time="2025-07-12T10:15:00.070767495Z" level=info msg="TaskExit event in podsandbox handler container_id:\"479e01e5cdb5db9e53d13c70f602154ab0c3d70f9300796591572fe312c685de\" id:\"4f2ee937490ff6ac9a2074eb7413fb773aa1f68b0079a3928c7f422ea895c6f5\" pid:6004 exited_at:{seconds:1752315299 nanos:983332089}"